Test Report: Docker_Linux_crio 21772

efb80dd6659b26178e36f8b49f3cb836e30a0156:2025-10-19:41980

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 12.92
36 TestAddons/parallel/RegistryCreds 0.39
37 TestAddons/parallel/Ingress 147.12
38 TestAddons/parallel/InspektorGadget 6.24
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 49.04
42 TestAddons/parallel/Headlamp 2.49
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.1
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.3
47 TestAddons/parallel/AmdGpuDevicePlugin 6.24
98 TestFunctional/parallel/ServiceCmdConnect 602.89
120 TestFunctional/parallel/ServiceCmd/DeployApp 600.6
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.81
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.34
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 2.32
197 TestJSONOutput/unpause/Command 1.88
285 TestPause/serial/Pause 5.93
346 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.19
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.37
358 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.06
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.08
371 TestStartStop/group/old-k8s-version/serial/Pause 7.13
373 TestStartStop/group/no-preload/serial/Pause 6.99
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.98
383 TestStartStop/group/embed-certs/serial/Pause 5.89
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.18
392 TestStartStop/group/newest-cni/serial/Pause 5.63
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable volcano --alsologtostderr -v=1: exit status 11 (239.902675ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:02.412710  364891 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:02.412816  364891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:02.412822  364891 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:02.412826  364891 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:02.412995  364891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:02.413231  364891 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:02.413569  364891 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:02.413587  364891 addons.go:606] checking whether the cluster is paused
	I1019 12:08:02.413661  364891 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:02.413674  364891 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:02.414021  364891 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:02.432728  364891 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:02.432789  364891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:02.450300  364891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:02.545407  364891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:02.545507  364891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:02.575202  364891 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:02.575223  364891 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:02.575226  364891 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:02.575229  364891 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:02.575232  364891 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:02.575235  364891 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:02.575238  364891 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:02.575240  364891 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:02.575245  364891 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:02.575255  364891 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:02.575258  364891 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:02.575261  364891 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:02.575264  364891 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:02.575266  364891 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:02.575269  364891 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:02.575273  364891 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:02.575276  364891 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:02.575285  364891 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:02.575287  364891 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:02.575289  364891 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:02.575295  364891 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:02.575300  364891 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:02.575303  364891 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:02.575305  364891 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:02.575308  364891 cri.go:89] found id: ""
	I1019 12:08:02.575343  364891 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:02.589685  364891 out.go:203] 
	W1019 12:08:02.590783  364891 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:02.590799  364891 out.go:285] * 
	* 
	W1019 12:08:02.594845  364891 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:02.596206  364891 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
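
Note: every "addons disable" failure in this report reduces to the same probe. Before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node (see the ssh_runner lines above); on this crio image the runc state root /run/runc does not exist, so the command exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED. As a rough illustration, here is a minimal Go sketch of that probe. It is an assumption-laden approximation, not minikube's actual code: minikube runs the command over SSH, and runcContainer/listPaused are names invented for this sketch.

// paused_check.go - a minimal sketch of the probe that fails above.
// Assumption: it runs directly on the node (minikube runs the same
// command over SSH via ssh_runner).
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// runcContainer holds the fields we care about from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // the check looks for "paused"
}

func listPaused() ([]string, error) {
	// Exact command from the log. When the runc state root (/run/runc
	// by default) is missing, runc exits 1 with "open /run/runc: no such
	// file or directory", which surfaces here as MK_ADDON_DISABLE_PAUSED.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		log.Fatalf("check paused: %v", err) // the failure mode in this report
	}
	fmt.Println("paused containers:", paused)
}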

TestAddons/parallel/Registry (12.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.505076ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002825178s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003250868s
addons_test.go:392: (dbg) Run:  kubectl --context addons-042725 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-042725 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-042725 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.443855633s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 ip
2025/10/19 12:08:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable registry --alsologtostderr -v=1: exit status 11 (255.719975ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:23.076198  367467 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:23.076481  367467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:23.076493  367467 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:23.076500  367467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:23.076769  367467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:23.077123  367467 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:23.077527  367467 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:23.077548  367467 addons.go:606] checking whether the cluster is paused
	I1019 12:08:23.077686  367467 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:23.077705  367467 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:23.078230  367467 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:23.098961  367467 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:23.099010  367467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:23.120842  367467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:23.223075  367467 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:23.223167  367467 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:23.252068  367467 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:23.252097  367467 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:23.252103  367467 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:23.252107  367467 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:23.252111  367467 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:23.252116  367467 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:23.252120  367467 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:23.252123  367467 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:23.252127  367467 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:23.252148  367467 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:23.252152  367467 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:23.252156  367467 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:23.252160  367467 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:23.252164  367467 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:23.252168  367467 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:23.252185  367467 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:23.252196  367467 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:23.252203  367467 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:23.252207  367467 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:23.252211  367467 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:23.252214  367467 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:23.252218  367467 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:23.252221  367467 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:23.252227  367467 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:23.252232  367467 cri.go:89] found id: ""
	I1019 12:08:23.252288  367467 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:23.265927  367467 out.go:203] 
	W1019 12:08:23.267414  367467 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:23.267452  367467 out.go:285] * 
	* 
	W1019 12:08:23.271669  367467 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:23.273252  367467 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.92s)
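
Note: the registry addon itself was healthy here; both pods became Ready and the in-cluster probe at addons_test.go:397 ("wget --spider -S http://registry.kube-system.svc.cluster.local") succeeded in about 2.4s. Only the trailing "addons disable registry" hit the same runc list failure described under TestAddons/serial/Volcano. For reference, a hedged Go equivalent of that reachability probe (assumption: it runs inside the cluster, where the kube-system service name resolves; an HTTP HEAD is the closest match to wget --spider):

// registry_probe.go - a hedged Go equivalent of the busybox
// `wget --spider -S` check; HEAD fetches headers without a body.
// Assumption: this runs inside the cluster, where the service DNS
// name registry.kube-system.svc.cluster.local resolves.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		log.Fatalf("registry unreachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status) // e.g. "200 OK"
}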

TestAddons/parallel/RegistryCreds (0.39s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.117354ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-042725
addons_test.go:332: (dbg) Run:  kubectl --context addons-042725 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (234.297992ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:23.480236  367605 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:23.480543  367605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:23.480556  367605 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:23.480562  367605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:23.480774  367605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:23.481068  367605 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:23.481442  367605 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:23.481465  367605 addons.go:606] checking whether the cluster is paused
	I1019 12:08:23.481570  367605 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:23.481595  367605 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:23.481996  367605 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:23.500270  367605 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:23.500331  367605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:23.517333  367605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:23.611995  367605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:23.612085  367605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:23.642464  367605 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:23.642498  367605 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:23.642505  367605 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:23.642510  367605 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:23.642514  367605 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:23.642520  367605 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:23.642524  367605 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:23.642529  367605 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:23.642533  367605 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:23.642542  367605 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:23.642547  367605 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:23.642551  367605 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:23.642555  367605 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:23.642559  367605 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:23.642563  367605 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:23.642574  367605 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:23.642582  367605 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:23.642589  367605 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:23.642599  367605 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:23.642603  367605 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:23.642607  367605 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:23.642611  367605 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:23.642615  367605 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:23.642619  367605 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:23.642623  367605 cri.go:89] found id: ""
	I1019 12:08:23.642671  367605 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:23.659456  367605 out.go:203] 
	W1019 12:08:23.660897  367605 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:23.660922  367605 out.go:285] * 
	* 
	W1019 12:08:23.664988  367605 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:23.666567  367605 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.39s)

TestAddons/parallel/Ingress (147.12s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-042725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-042725 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-042725 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [eb22ef25-98bf-41c5-81e6-4ad4ab209f42] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [eb22ef25-98bf-41c5-81e6-4ad4ab209f42] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003588819s
I1019 12:08:30.308930  355262 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.681732876s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-042725 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
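
Note: the actual failure is the check at addons_test.go:264, where "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" ran for 2m14s and the ssh session reported exit status 28, which is curl's "operation timed out" code: the ingress controller never answered on port 80. A hedged Go version of the same probe (assumptions: it runs on the minikube node, as the test does via `minikube ssh`, and the 30s timeout is illustrative, not the test's):

// ingress_probe.go - a hedged Go version of the failing curl check.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		log.Fatal(err)
	}
	// The Host header is what routes the request to the nginx Ingress
	// rule; on *http.Request it must be set via the Host field rather
	// than Header.Set.
	req.Host = "nginx.example.com"

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalf("ingress probe failed (this report's case): %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}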
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-042725
helpers_test.go:243: (dbg) docker inspect addons-042725:

-- stdout --
	[
	    {
	        "Id": "f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4",
	        "Created": "2025-10-19T12:05:45.305517142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:05:45.341931582Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/hosts",
	        "LogPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4-json.log",
	        "Name": "/addons-042725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-042725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-042725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4",
	                "LowerDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-042725",
	                "Source": "/var/lib/docker/volumes/addons-042725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-042725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-042725",
	                "name.minikube.sigs.k8s.io": "addons-042725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62e03c13e2e6bec5ee9197f03f522bee707bae2e6d6e6af712f0f688e2de996c",
	            "SandboxKey": "/var/run/docker/netns/62e03c13e2e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-042725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:58:af:55:9d:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "72895bb5262d44434cac86093316b6324cc823786d71e0451c062b6c4dad043c",
	                    "EndpointID": "f7da72f0e5832dc751a154b659d2ce0ff9de14d2eac9969f1add0e403856235c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-042725",
	                        "f0962584dd5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
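
Note: the SSH endpoint used throughout this report (127.0.0.1:33138 in the sshutil.go lines) comes straight from this inspect output; the template in the cli_runner lines, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, selects the first host binding for port 22/tcp. A hedged Go sketch of the same lookup via plain JSON decoding (assumptions: the docker CLI is on PATH and the addons-042725 container still exists; the struct models only the fields this lookup needs):

// hostport.go - what the inspect template above extracts, done via
// JSON decoding instead of a Go template.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectInfo struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "addons-042725").Output()
	if err != nil {
		log.Fatal(err)
	}
	var infos []inspectInfo // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &infos); err != nil {
		log.Fatal(err)
	}
	if len(infos) == 0 {
		log.Fatal("no such container")
	}
	bindings := infos[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	// With the inspect output above this prints 127.0.0.1:33138, the
	// address the sshutil.go log lines connect to.
	fmt.Printf("%s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
}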
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-042725 -n addons-042725
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-042725 logs -n 25: (1.155063887s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-904842 --alsologtostderr --binary-mirror http://127.0.0.1:34101 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-904842 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ -p binary-mirror-904842                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-904842 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ addons  │ enable dashboard -p addons-042725                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-042725                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ start   │ -p addons-042725 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:08 UTC │
	│ addons  │ addons-042725 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ enable headlamp -p addons-042725 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ ssh     │ addons-042725 ssh cat /opt/local-path-provisioner/pvc-275508de-1e47-445a-b7b2-b1fe712e92c0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │ 19 Oct 25 12:08 UTC │
	│ addons  │ addons-042725 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ ip      │ addons-042725 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │ 19 Oct 25 12:08 UTC │
	│ addons  │ addons-042725 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-042725                                                                                                                                                                                                                                                                                                                                                                                           │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │ 19 Oct 25 12:08 UTC │
	│ addons  │ addons-042725 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ ssh     │ addons-042725 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:09 UTC │                     │
	│ addons  │ addons-042725 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:09 UTC │                     │
	│ ip      │ addons-042725 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-042725        │ jenkins │ v1.37.0 │ 19 Oct 25 12:10 UTC │ 19 Oct 25 12:10 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:05:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
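
The prefix format above is klog's standard one: a severity letter [IWEF], month and day, a microsecond timestamp, the emitting process ID (klog calls it threadid), and the source file and line. As a quick worked example, a minimal Go sketch (illustrative only, not minikube code) that decomposes the first log line below against this format:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the prefix documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var logLine = regexp.MustCompile(
		`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		m := logLine.FindStringSubmatch(
			"I1019 12:05:22.402114  356592 out.go:360] Setting OutFile to fd 1 ...")
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s-%s time=%s pid=%s source=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
	}

Run against the first line below, it reports severity I (informational), date 10-19, pid 356592, and source out.go:360, matching the fields visible throughout this log.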
	I1019 12:05:22.402114  356592 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:05:22.402364  356592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:22.402373  356592 out.go:374] Setting ErrFile to fd 2...
	I1019 12:05:22.402377  356592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:22.402558  356592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:05:22.403073  356592 out.go:368] Setting JSON to false
	I1019 12:05:22.403984  356592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6470,"bootTime":1760869052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:05:22.404061  356592 start.go:141] virtualization: kvm guest
	I1019 12:05:22.405823  356592 out.go:179] * [addons-042725] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:05:22.407575  356592 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:05:22.407585  356592 notify.go:220] Checking for updates...
	I1019 12:05:22.409770  356592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:05:22.410950  356592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:05:22.412145  356592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:05:22.413523  356592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:05:22.414649  356592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:05:22.415977  356592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:05:22.438652  356592 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:05:22.438742  356592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:22.492153  356592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 12:05:22.482164439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:22.492249  356592 docker.go:318] overlay module found
	I1019 12:05:22.494014  356592 out.go:179] * Using the docker driver based on user configuration
	I1019 12:05:22.495123  356592 start.go:305] selected driver: docker
	I1019 12:05:22.495135  356592 start.go:925] validating driver "docker" against <nil>
	I1019 12:05:22.495146  356592 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:05:22.495751  356592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:22.550628  356592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 12:05:22.541359516 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:22.550791  356592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:05:22.550998  356592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:05:22.552697  356592 out.go:179] * Using Docker driver with root privileges
	I1019 12:05:22.553879  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:22.553940  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:22.553951  356592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:05:22.554010  356592 start.go:349] cluster config:
	{Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:05:22.555200  356592 out.go:179] * Starting "addons-042725" primary control-plane node in "addons-042725" cluster
	I1019 12:05:22.556225  356592 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:05:22.557328  356592 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:05:22.558392  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:22.558460  356592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:05:22.558466  356592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:05:22.558487  356592 cache.go:58] Caching tarball of preloaded images
	I1019 12:05:22.558604  356592 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:05:22.558620  356592 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:05:22.558960  356592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json ...
	I1019 12:05:22.558991  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json: {Name:mk683788e7d3d89c0ee0bc8e7707ffe5a1bcd2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:22.575359  356592 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:05:22.575522  356592 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 12:05:22.575543  356592 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 12:05:22.575548  356592 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 12:05:22.575555  356592 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 12:05:22.575561  356592 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 12:05:34.775726  356592 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 12:05:34.775774  356592 cache.go:232] Successfully downloaded all kic artifacts
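
Note that the kic base image is pinned by both a tag and a sha256 digest, so the cache checks above are content-addressed. A sketch of pulling such a reference apart with the github.com/distribution/reference parser (an illustration only; minikube's own image plumbing in image.go may differ):

	package main

	import (
		"fmt"

		"github.com/distribution/reference"
	)

	func main() {
		// The digest-pinned base image from the log above.
		img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757" +
			"@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6"
		ref, err := reference.ParseNormalizedNamed(img)
		if err != nil {
			panic(err)
		}
		fmt.Println("repository:", reference.Path(ref))
		if t, ok := ref.(reference.Tagged); ok {
			fmt.Println("tag:", t.Tag())
		}
		if d, ok := ref.(reference.Digested); ok {
			fmt.Println("digest:", d.Digest())
		}
	}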
	I1019 12:05:34.775813  356592 start.go:360] acquireMachinesLock for addons-042725: {Name:mk2d91f51d8b1754188cdced2792e6e9ca0fe32c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:05:34.775931  356592 start.go:364] duration metric: took 90.196µs to acquireMachinesLock for "addons-042725"
	I1019 12:05:34.775964  356592 start.go:93] Provisioning new machine with config: &{Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:05:34.776040  356592 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:05:34.777640  356592 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 12:05:34.777898  356592 start.go:159] libmachine.API.Create for "addons-042725" (driver="docker")
	I1019 12:05:34.777936  356592 client.go:168] LocalClient.Create starting
	I1019 12:05:34.778069  356592 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:05:35.131911  356592 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:05:35.373857  356592 cli_runner.go:164] Run: docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:05:35.391374  356592 cli_runner.go:211] docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:05:35.391467  356592 network_create.go:284] running [docker network inspect addons-042725] to gather additional debugging logs...
	I1019 12:05:35.391495  356592 cli_runner.go:164] Run: docker network inspect addons-042725
	W1019 12:05:35.408546  356592 cli_runner.go:211] docker network inspect addons-042725 returned with exit code 1
	I1019 12:05:35.408580  356592 network_create.go:287] error running [docker network inspect addons-042725]: docker network inspect addons-042725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-042725 not found
	I1019 12:05:35.408597  356592 network_create.go:289] output of [docker network inspect addons-042725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-042725 not found
	
	** /stderr **
	I1019 12:05:35.408732  356592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:05:35.426338  356592 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cd8db0}
	I1019 12:05:35.426378  356592 network_create.go:124] attempt to create docker network addons-042725 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 12:05:35.426440  356592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-042725 addons-042725
	I1019 12:05:35.481999  356592 network_create.go:108] docker network addons-042725 192.168.49.0/24 created
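
The network-create step above is reproducible outside minikube. A minimal Go sketch that shells out to the same `docker network create` invocation the log records (flags and values copied verbatim from the log; error handling simplified):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags mirror the logged cli_runner invocation exactly.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-042725",
			"addons-042725")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("network create failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("created network: %s", out)
	}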
	I1019 12:05:35.482038  356592 kic.go:121] calculated static IP "192.168.49.2" for the "addons-042725" container
	I1019 12:05:35.482102  356592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:05:35.499467  356592 cli_runner.go:164] Run: docker volume create addons-042725 --label name.minikube.sigs.k8s.io=addons-042725 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:05:35.517137  356592 oci.go:103] Successfully created a docker volume addons-042725
	I1019 12:05:35.517209  356592 cli_runner.go:164] Run: docker run --rm --name addons-042725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --entrypoint /usr/bin/test -v addons-042725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:05:40.947214  356592 cli_runner.go:217] Completed: docker run --rm --name addons-042725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --entrypoint /usr/bin/test -v addons-042725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (5.429946825s)
	I1019 12:05:40.947250  356592 oci.go:107] Successfully prepared a docker volume addons-042725
	I1019 12:05:40.947278  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:40.947297  356592 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:05:40.947362  356592 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-042725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:05:45.234525  356592 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-042725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.287098266s)
	I1019 12:05:45.234561  356592 kic.go:203] duration metric: took 4.287258224s to extract preloaded images to volume ...
	W1019 12:05:45.234676  356592 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:05:45.234715  356592 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:05:45.234766  356592 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:05:45.290457  356592 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-042725 --name addons-042725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-042725 --network addons-042725 --ip 192.168.49.2 --volume addons-042725:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:05:45.550560  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Running}}
	I1019 12:05:45.567977  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.585336  356592 cli_runner.go:164] Run: docker exec addons-042725 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:05:45.629896  356592 oci.go:144] the created container "addons-042725" has a running status.
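
The inspect/exec probes above are how minikube decides the node container came up healthy. A rough Go sketch of the same status poll (a simplification; minikube's real retry logic in its kic driver is more involved):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls `docker container inspect` until the container
	// reports State.Status == "running" or the timeout elapses.
	func waitRunning(name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("docker", "container", "inspect",
				"--format", "{{.State.Status}}", name).Output()
			if err == nil && strings.TrimSpace(string(out)) == "running" {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("container %s not running after %s", name, timeout)
	}

	func main() {
		if err := waitRunning("addons-042725", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}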
	I1019 12:05:45.629931  356592 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa...
	I1019 12:05:45.862628  356592 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:05:45.890026  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.911140  356592 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:05:45.911166  356592 kic_runner.go:114] Args: [docker exec --privileged addons-042725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:05:45.956386  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.974321  356592 machine.go:93] provisionDockerMachine start ...
	I1019 12:05:45.974416  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:45.992969  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:45.993208  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:45.993221  356592 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:05:46.127217  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-042725
	
	I1019 12:05:46.127251  356592 ubuntu.go:182] provisioning hostname "addons-042725"
	I1019 12:05:46.127333  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:46.146050  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:46.146361  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:46.146385  356592 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-042725 && echo "addons-042725" | sudo tee /etc/hostname
	I1019 12:05:46.289886  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-042725
	
	I1019 12:05:46.289953  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:46.309356  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:46.309614  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:46.309632  356592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-042725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-042725/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-042725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:05:46.441952  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:05:46.441978  356592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:05:46.442014  356592 ubuntu.go:190] setting up certificates
	I1019 12:05:46.442027  356592 provision.go:84] configureAuth start
	I1019 12:05:46.442081  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:46.459541  356592 provision.go:143] copyHostCerts
	I1019 12:05:46.459612  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:05:46.459732  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:05:46.459792  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:05:46.459905  356592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.addons-042725 san=[127.0.0.1 192.168.49.2 addons-042725 localhost minikube]
	I1019 12:05:47.016316  356592 provision.go:177] copyRemoteCerts
	I1019 12:05:47.016386  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:05:47.016439  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.033986  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.128371  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:05:47.146531  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:05:47.163327  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:05:47.179976  356592 provision.go:87] duration metric: took 737.929126ms to configureAuth
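
configureAuth generated a server certificate whose SANs cover the loopback address, the container IP, and the machine names listed above. For orientation, a compact crypto/x509 sketch that produces a comparable certificate (self-signed here for brevity; minikube actually signs its server cert with the CA created earlier):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-042725"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log above.
			DNSNames:    []string{"addons-042725", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}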
	I1019 12:05:47.180001  356592 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:05:47.180167  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:05:47.180266  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.197964  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:47.198205  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:47.198233  356592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:05:47.439172  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:05:47.439208  356592 machine.go:96] duration metric: took 1.464857601s to provisionDockerMachine
	I1019 12:05:47.439221  356592 client.go:171] duration metric: took 12.661273606s to LocalClient.Create
	I1019 12:05:47.439248  356592 start.go:167] duration metric: took 12.661350449s to libmachine.API.Create "addons-042725"
	I1019 12:05:47.439260  356592 start.go:293] postStartSetup for "addons-042725" (driver="docker")
	I1019 12:05:47.439276  356592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:05:47.439356  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:05:47.439404  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.457237  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.554134  356592 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:05:47.557567  356592 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:05:47.557606  356592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:05:47.557620  356592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:05:47.557676  356592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:05:47.557703  356592 start.go:296] duration metric: took 118.432853ms for postStartSetup
	I1019 12:05:47.557973  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:47.574799  356592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json ...
	I1019 12:05:47.575062  356592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:05:47.575101  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.591974  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.683342  356592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:05:47.687782  356592 start.go:128] duration metric: took 12.911726122s to createHost
	I1019 12:05:47.687807  356592 start.go:83] releasing machines lock for "addons-042725", held for 12.911861976s
	I1019 12:05:47.687879  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:47.704631  356592 ssh_runner.go:195] Run: cat /version.json
	I1019 12:05:47.704678  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.704683  356592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:05:47.704760  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.722251  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.722589  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.865764  356592 ssh_runner.go:195] Run: systemctl --version
	I1019 12:05:47.871965  356592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:05:47.905088  356592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:05:47.909579  356592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:05:47.909650  356592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:05:47.934301  356592 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:05:47.934330  356592 start.go:495] detecting cgroup driver to use...
	I1019 12:05:47.934368  356592 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:05:47.934441  356592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:05:47.950407  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:05:47.962410  356592 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:05:47.962481  356592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:05:47.978505  356592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:05:47.995545  356592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:05:48.074725  356592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:05:48.160056  356592 docker.go:234] disabling docker service ...
	I1019 12:05:48.160122  356592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:05:48.178795  356592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:05:48.190992  356592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:05:48.271185  356592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:05:48.348568  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:05:48.360746  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:05:48.374852  356592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:05:48.374907  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.384778  356592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:05:48.384845  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.393212  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.401417  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.409762  356592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:05:48.417399  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.425693  356592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.438716  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.447060  356592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:05:48.454000  356592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:05:48.460782  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:05:48.535144  356592 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:05:48.638102  356592 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:05:48.638180  356592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:05:48.641921  356592 start.go:563] Will wait 60s for crictl version
	I1019 12:05:48.641985  356592 ssh_runner.go:195] Run: which crictl
	I1019 12:05:48.645341  356592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:05:48.668927  356592 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:05:48.669013  356592 ssh_runner.go:195] Run: crio --version
	I1019 12:05:48.696373  356592 ssh_runner.go:195] Run: crio --version
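
The runtime probes above can be replayed by hand. A small Go sketch invoking crictl against the same CRI socket (assumes crictl is installed and passwordless sudo is available, as on this CI agent):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same endpoint minikube wrote to /etc/crictl.yaml above.
		out, err := exec.Command("sudo", "crictl",
			"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
			"version").CombinedOutput()
		if err != nil {
			fmt.Printf("crictl version failed: %v\n", err)
		}
		fmt.Print(string(out))
	}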
	I1019 12:05:48.725516  356592 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:05:48.726592  356592 cli_runner.go:164] Run: docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:05:48.742907  356592 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 12:05:48.746898  356592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:05:48.756755  356592 kubeadm.go:883] updating cluster {Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:05:48.756871  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:48.756914  356592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:05:48.787541  356592 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:05:48.787563  356592 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:05:48.787612  356592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:05:48.812563  356592 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:05:48.812587  356592 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:05:48.812597  356592 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 12:05:48.812714  356592 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-042725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:05:48.812796  356592 ssh_runner.go:195] Run: crio config
	I1019 12:05:48.856827  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:48.856863  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:48.856887  356592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:05:48.856920  356592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-042725 NodeName:addons-042725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:05:48.857067  356592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-042725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
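
The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A short Go sketch (illustrative only, using gopkg.in/yaml.v3) that walks such a stream and reports each document's kind:

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Stand-in for the full config above; only the kinds are shown.
		docs := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\n" +
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		dec := yaml.NewDecoder(strings.NewReader(docs))
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once the stream is exhausted
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}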
	
	I1019 12:05:48.857140  356592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:05:48.865234  356592 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:05:48.865287  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:05:48.872778  356592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 12:05:48.884995  356592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:05:48.899280  356592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1019 12:05:48.910873  356592 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:05:48.914149  356592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:05:48.923401  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:05:49.002731  356592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:05:49.027657  356592 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725 for IP: 192.168.49.2
	I1019 12:05:49.027687  356592 certs.go:195] generating shared ca certs ...
	I1019 12:05:49.027709  356592 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.027839  356592 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:05:49.090535  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt ...
	I1019 12:05:49.090562  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt: {Name:mkd44fe82d6d6779a4a67d121d283099df4db026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.090721  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key ...
	I1019 12:05:49.090732  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key: {Name:mk380494cdd431ba8cbb4d01406505021bbb0953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.090804  356592 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:05:49.262375  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt ...
	I1019 12:05:49.262406  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt: {Name:mkdf9176b4ad4411024ab0785072334d4363e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.262576  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key ...
	I1019 12:05:49.262588  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key: {Name:mkd5ac799295c2b01a1de6ff9fdfeb6b58ec5937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.262655  356592 certs.go:257] generating profile certs ...
	I1019 12:05:49.262719  356592 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key
	I1019 12:05:49.262733  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt with IP's: []
	I1019 12:05:49.397758  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt ...
	I1019 12:05:49.397790  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: {Name:mk393a8dc45ccf6aae18a2f9497e245b173e789b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.397959  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key ...
	I1019 12:05:49.397970  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key: {Name:mk1235248ab232563a3bb7c23927a3348ed9ad9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.398046  356592 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047
	I1019 12:05:49.398065  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 12:05:49.611799  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 ...
	I1019 12:05:49.611834  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047: {Name:mk7dc1bdfb6eda20fd91773733d1306f7614411f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.611996  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047 ...
	I1019 12:05:49.612010  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047: {Name:mk25c52b11c31df91183d40f2c11556c73cb6972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.612081  356592 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt
	I1019 12:05:49.612197  356592 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047 -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key
	I1019 12:05:49.612265  356592 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key
	I1019 12:05:49.612287  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt with IP's: []
	I1019 12:05:49.827646  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt ...
	I1019 12:05:49.827675  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt: {Name:mkcc083d5799af1a3dbeac7ea5e0a3de01075ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.827847  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key ...
	I1019 12:05:49.827860  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key: {Name:mk7a4c4f5aa9871ccbc9fbf756b87b65d01a5e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.828049  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:05:49.828083  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:05:49.828106  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:05:49.828128  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:05:49.828778  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:05:49.846393  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:05:49.863483  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:05:49.880249  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:05:49.897109  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:05:49.913354  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:05:49.929938  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:05:49.946497  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:05:49.963145  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:05:49.981247  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:05:49.993240  356592 ssh_runner.go:195] Run: openssl version
	I1019 12:05:49.999020  356592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:05:50.009285  356592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.012914  356592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.012960  356592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.046562  356592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
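	[editor's note] The two steps above install the minikube CA into the system trust store: the PEM is linked into /etc/ssl/certs, and a subject-hash alias (b5213941.0 here) is created so OpenSSL's lookup-by-hash finds it. The same alias can be derived by hand (sketch of the equivalent shell):
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"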
	I1019 12:05:50.055566  356592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:05:50.059043  356592 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:05:50.059108  356592 kubeadm.go:400] StartCluster: {Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:05:50.059177  356592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:05:50.059220  356592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:05:50.085335  356592 cri.go:89] found id: ""
	I1019 12:05:50.085407  356592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:05:50.093475  356592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:05:50.100953  356592 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:05:50.101008  356592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:05:50.108371  356592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:05:50.108387  356592 kubeadm.go:157] found existing configuration files:
	
	I1019 12:05:50.108444  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:05:50.115606  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:05:50.115676  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:05:50.122469  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:05:50.129651  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:05:50.129692  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:05:50.136673  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:05:50.143734  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:05:50.143775  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:05:50.150801  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:05:50.157853  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:05:50.157902  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
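	[editor's note] The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is removed before kubeadm init. Condensed, the same logic is (a sketch, not minikube's actual code):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done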
	I1019 12:05:50.164688  356592 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:05:50.200648  356592 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:05:50.200737  356592 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:05:50.220227  356592 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:05:50.220306  356592 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:05:50.220438  356592 kubeadm.go:318] OS: Linux
	I1019 12:05:50.220514  356592 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:05:50.220588  356592 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:05:50.220650  356592 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:05:50.220743  356592 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:05:50.220831  356592 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:05:50.220914  356592 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:05:50.221012  356592 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:05:50.221086  356592 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:05:50.275803  356592 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:05:50.275950  356592 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:05:50.276069  356592 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:05:50.283625  356592 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:05:50.286355  356592 out.go:252]   - Generating certificates and keys ...
	I1019 12:05:50.286461  356592 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:05:50.286551  356592 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:05:50.453278  356592 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:05:50.586345  356592 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:05:50.979480  356592 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:05:51.129890  356592 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:05:51.667988  356592 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:05:51.668122  356592 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-042725 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:05:51.876369  356592 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:05:51.876568  356592 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-042725 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:05:51.892924  356592 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:05:51.961391  356592 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:05:52.057190  356592 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:05:52.057323  356592 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:05:52.174242  356592 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:05:52.447560  356592 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:05:52.659323  356592 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:05:52.772052  356592 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:05:52.899333  356592 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:05:52.899950  356592 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:05:52.903480  356592 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:05:52.904742  356592 out.go:252]   - Booting up control plane ...
	I1019 12:05:52.904866  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:05:52.904972  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:05:52.905721  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:05:52.918783  356592 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:05:52.918905  356592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:05:52.925204  356592 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:05:52.925548  356592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:05:52.925594  356592 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:05:53.021904  356592 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:05:53.022050  356592 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:05:54.022574  356592 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000941629s
	I1019 12:05:54.025408  356592 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:05:54.025574  356592 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 12:05:54.025685  356592 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:05:54.025777  356592 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:05:55.241610  356592 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.215952193s
	I1019 12:05:55.886039  356592 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.860501155s
	I1019 12:05:57.527596  356592 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502020548s
	I1019 12:05:57.538189  356592 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:05:57.547552  356592 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:05:57.555745  356592 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:05:57.556048  356592 kubeadm.go:318] [mark-control-plane] Marking the node addons-042725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:05:57.564334  356592 kubeadm.go:318] [bootstrap-token] Using token: h8tkp4.5gchpu2ualu0x2ks
	I1019 12:05:57.565665  356592 out.go:252]   - Configuring RBAC rules ...
	I1019 12:05:57.565804  356592 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:05:57.568622  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:05:57.573338  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:05:57.575593  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:05:57.578795  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:05:57.581089  356592 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:05:57.934580  356592 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:05:58.347950  356592 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:05:58.932885  356592 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:05:58.933659  356592 kubeadm.go:318] 
	I1019 12:05:58.933730  356592 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:05:58.933754  356592 kubeadm.go:318] 
	I1019 12:05:58.933850  356592 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:05:58.933868  356592 kubeadm.go:318] 
	I1019 12:05:58.933909  356592 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:05:58.933991  356592 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:05:58.934069  356592 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:05:58.934079  356592 kubeadm.go:318] 
	I1019 12:05:58.934158  356592 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:05:58.934170  356592 kubeadm.go:318] 
	I1019 12:05:58.934208  356592 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:05:58.934214  356592 kubeadm.go:318] 
	I1019 12:05:58.934259  356592 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:05:58.934326  356592 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:05:58.934382  356592 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:05:58.934388  356592 kubeadm.go:318] 
	I1019 12:05:58.934528  356592 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:05:58.934619  356592 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:05:58.934630  356592 kubeadm.go:318] 
	I1019 12:05:58.934701  356592 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h8tkp4.5gchpu2ualu0x2ks \
	I1019 12:05:58.934793  356592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:05:58.934815  356592 kubeadm.go:318] 	--control-plane 
	I1019 12:05:58.934822  356592 kubeadm.go:318] 
	I1019 12:05:58.934910  356592 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:05:58.934918  356592 kubeadm.go:318] 
	I1019 12:05:58.934983  356592 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h8tkp4.5gchpu2ualu0x2ks \
	I1019 12:05:58.935068  356592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:05:58.937647  356592 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:05:58.937754  356592 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
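	[editor's note] Both preflight warnings above are benign here: the SystemVerification one only means this GCP kernel ships no "configs" module for kubeadm to read, and kubeadm proceeded regardless; the Service-Kubelet one would be silenced by enabling the unit (sketch):
	  sudo systemctl enable kubelet.service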
	I1019 12:05:58.937769  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:58.937777  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:58.940205  356592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:05:58.941287  356592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:05:58.945456  356592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:05:58.945472  356592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:05:58.958457  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
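	[editor's note] With the docker driver and crio runtime, minikube applies its kindnet manifest as the CNI (the cni.yaml applied above). Assuming the DaemonSet keeps kindnet's upstream name, the rollout can be checked with (sketch):
	  kubectl -n kube-system rollout status daemonset/kindnet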
	I1019 12:05:59.155212  356592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:05:59.155362  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:05:59.155405  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-042725 minikube.k8s.io/updated_at=2025_10_19T12_05_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=addons-042725 minikube.k8s.io/primary=true
	I1019 12:05:59.231534  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:05:59.240594  356592 ops.go:34] apiserver oom_adj: -16
	I1019 12:05:59.731737  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:00.232109  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:00.732668  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:01.231738  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:01.731905  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:02.231991  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:02.732557  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:03.231887  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:03.294305  356592 kubeadm.go:1113] duration metric: took 4.139025269s to wait for elevateKubeSystemPrivileges
	I1019 12:06:03.294350  356592 kubeadm.go:402] duration metric: took 13.235249068s to StartCluster
	I1019 12:06:03.294391  356592 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:03.294536  356592 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:06:03.294975  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:03.295171  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:06:03.295177  356592 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:06:03.295254  356592 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 12:06:03.295385  356592 addons.go:69] Setting yakd=true in profile "addons-042725"
	I1019 12:06:03.295405  356592 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-042725"
	I1019 12:06:03.295415  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:06:03.295438  356592 addons.go:69] Setting registry=true in profile "addons-042725"
	I1019 12:06:03.295414  356592 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-042725"
	I1019 12:06:03.295454  356592 addons.go:238] Setting addon registry=true in "addons-042725"
	I1019 12:06:03.295429  356592 addons.go:238] Setting addon yakd=true in "addons-042725"
	I1019 12:06:03.295471  356592 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-042725"
	I1019 12:06:03.295441  356592 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-042725"
	I1019 12:06:03.295499  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295504  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295510  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295523  356592 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-042725"
	I1019 12:06:03.295555  356592 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-042725"
	I1019 12:06:03.295576  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295603  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295951  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296031  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296043  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296068  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296117  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296318  356592 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-042725"
	I1019 12:06:03.296342  356592 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-042725"
	I1019 12:06:03.296558  356592 addons.go:69] Setting registry-creds=true in profile "addons-042725"
	I1019 12:06:03.296616  356592 addons.go:238] Setting addon registry-creds=true in "addons-042725"
	I1019 12:06:03.296669  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.296917  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296996  356592 addons.go:69] Setting cloud-spanner=true in profile "addons-042725"
	I1019 12:06:03.297018  356592 out.go:179] * Verifying Kubernetes components...
	I1019 12:06:03.297105  356592 addons.go:69] Setting volumesnapshots=true in profile "addons-042725"
	I1019 12:06:03.297126  356592 addons.go:238] Setting addon volumesnapshots=true in "addons-042725"
	I1019 12:06:03.297153  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.297197  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297652  356592 addons.go:69] Setting ingress-dns=true in profile "addons-042725"
	I1019 12:06:03.297679  356592 addons.go:238] Setting addon ingress-dns=true in "addons-042725"
	I1019 12:06:03.297684  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297715  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.297916  356592 addons.go:69] Setting inspektor-gadget=true in profile "addons-042725"
	I1019 12:06:03.297942  356592 addons.go:238] Setting addon inspektor-gadget=true in "addons-042725"
	I1019 12:06:03.297980  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298139  356592 addons.go:69] Setting gcp-auth=true in profile "addons-042725"
	I1019 12:06:03.298165  356592 mustload.go:65] Loading cluster: addons-042725
	I1019 12:06:03.298227  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.298265  356592 addons.go:69] Setting storage-provisioner=true in profile "addons-042725"
	I1019 12:06:03.298295  356592 addons.go:238] Setting addon storage-provisioner=true in "addons-042725"
	I1019 12:06:03.298317  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298411  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:06:03.298499  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:06:03.298511  356592 addons.go:69] Setting metrics-server=true in profile "addons-042725"
	I1019 12:06:03.298529  356592 addons.go:238] Setting addon metrics-server=true in "addons-042725"
	I1019 12:06:03.298550  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298703  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.298503  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.301961  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.302850  356592 addons.go:69] Setting ingress=true in profile "addons-042725"
	I1019 12:06:03.302874  356592 addons.go:238] Setting addon ingress=true in "addons-042725"
	I1019 12:06:03.302920  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.303325  356592 addons.go:69] Setting volcano=true in profile "addons-042725"
	I1019 12:06:03.303344  356592 addons.go:238] Setting addon volcano=true in "addons-042725"
	I1019 12:06:03.303353  356592 addons.go:69] Setting default-storageclass=true in profile "addons-042725"
	I1019 12:06:03.303372  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.303378  356592 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-042725"
	I1019 12:06:03.303407  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.303809  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.304209  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297023  356592 addons.go:238] Setting addon cloud-spanner=true in "addons-042725"
	I1019 12:06:03.304716  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.311995  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.312476  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.351499  356592 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 12:06:03.351699  356592 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 12:06:03.351497  356592 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 12:06:03.353943  356592 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:06:03.354016  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 12:06:03.354100  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 12:06:03.354110  356592 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 12:06:03.354166  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.354524  356592 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:06:03.354540  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 12:06:03.354585  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.354943  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.365602  356592 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 12:06:03.366867  356592 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:06:03.366893  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 12:06:03.366958  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.369815  356592 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 12:06:03.369815  356592 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 12:06:03.371275  356592 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1019 12:06:03.371298  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 12:06:03.371360  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.373090  356592 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 12:06:03.374297  356592 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 12:06:03.374322  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 12:06:03.374409  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.395179  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 12:06:03.395242  356592 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 12:06:03.399843  356592 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-042725"
	I1019 12:06:03.399905  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.400560  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.400980  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 12:06:03.402206  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 12:06:03.404997  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 12:06:03.405348  356592 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:06:03.405370  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 12:06:03.405457  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.407298  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 12:06:03.408408  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1019 12:06:03.409415  356592 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 12:06:03.409881  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 12:06:03.409952  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 12:06:03.410012  356592 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 12:06:03.411132  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 12:06:03.410068  356592 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 12:06:03.414577  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:03.414620  356592 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 12:06:03.414633  356592 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 12:06:03.414708  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.411794  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 12:06:03.415585  356592 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 12:06:03.415661  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.411191  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 12:06:03.416757  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 12:06:03.418844  356592 addons.go:238] Setting addon default-storageclass=true in "addons-042725"
	I1019 12:06:03.418892  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.419370  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.419566  356592 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 12:06:03.419630  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.421986  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.423339  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 12:06:03.423358  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 12:06:03.423438  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.423786  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:03.424161  356592 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:06:03.425887  356592 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:06:03.425907  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:06:03.425962  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.426035  356592 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:06:03.426052  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 12:06:03.426116  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.433450  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
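	[editor's note] The dense pipeline above is just a Corefile rewrite: insert a hosts block mapping host.minikube.internal to the host gateway ahead of the forward plugin, and enable query logging. A readable equivalent with plain kubectl (sketch):
	  kubectl -n kube-system get configmap coredns -o yaml \
	    | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	          -e '/^        errors *$/i \        log' \
	    | kubectl replace -f -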
	I1019 12:06:03.438999  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.447524  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.453476  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.455659  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.460368  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.467690  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.491037  356592 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 12:06:03.493024  356592 out.go:179]   - Using image docker.io/busybox:stable
	I1019 12:06:03.494171  356592 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:06:03.494192  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 12:06:03.494307  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.494549  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.500626  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.500729  356592 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:06:03.501576  356592 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:06:03.500785  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.501651  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.505077  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.506647  356592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:06:03.512568  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.513224  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.517274  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	W1019 12:06:03.519638  356592 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:06:03.519951  356592 retry.go:31] will retry after 316.586718ms: ssh: handshake failed: EOF
	I1019 12:06:03.533327  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.545454  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.629790  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:06:03.632833  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 12:06:03.647678  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:06:03.648672  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:06:03.676014  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:06:03.688925  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 12:06:03.689031  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 12:06:03.702225  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 12:06:03.702256  356592 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 12:06:03.704207  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 12:06:03.704283  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 12:06:03.710035  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:06:03.710247  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 12:06:03.710272  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 12:06:03.715144  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:06:03.716662  356592 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:03.716682  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 12:06:03.729680  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:06:03.734708  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:06:03.743469  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 12:06:03.743497  356592 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 12:06:03.753398  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 12:06:03.753444  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 12:06:03.761788  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 12:06:03.761830  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 12:06:03.774651  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:03.789013  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 12:06:03.789046  356592 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 12:06:03.799785  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:06:03.799811  356592 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 12:06:03.812733  356592 node_ready.go:35] waiting up to 6m0s for node "addons-042725" to be "Ready" ...
	I1019 12:06:03.813046  356592 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
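	[Note] The host-record injection above amounts to appending a hosts block to the CoreDNS Corefile. A minimal sketch of the injected entry, assuming the standard CoreDNS hosts plugin (the IP and name come from the log line itself; the exact block layout is an assumption, not minikube's verbatim output):

		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}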
	I1019 12:06:03.814383  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 12:06:03.814470  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 12:06:03.823872  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 12:06:03.823928  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 12:06:03.835956  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:06:03.851177  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 12:06:03.851221  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 12:06:03.868769  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 12:06:03.868795  356592 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 12:06:03.871658  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 12:06:03.871743  356592 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 12:06:03.896188  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 12:06:03.896216  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 12:06:03.949704  356592 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:06:03.949803  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 12:06:03.957240  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:06:03.957263  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 12:06:03.967396  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 12:06:03.967472  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 12:06:04.007979  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 12:06:04.008135  356592 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 12:06:04.009293  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:06:04.017213  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:06:04.064727  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 12:06:04.064759  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 12:06:04.117589  356592 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 12:06:04.117617  356592 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 12:06:04.122688  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 12:06:04.122724  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 12:06:04.148921  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:06:04.148947  356592 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 12:06:04.161054  356592 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:06:04.161154  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 12:06:04.212030  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:06:04.228336  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:06:04.320487  356592 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-042725" context rescaled to 1 replicas
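	[Note] The coredns rescale reported above is a scale-subresource update. A minimal Go sketch of that operation with client-go, assuming the usual GetScale/UpdateScale round trip (scaleCoreDNS is a hypothetical helper, not minikube's kapi.go code; the namespace and deployment name are taken from the log line):

		package addons

		import (
			"context"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)

		// scaleCoreDNS sets the coredns Deployment to one replica via the
		// scale subresource, matching the rescale reported in the log above.
		func scaleCoreDNS(cs kubernetes.Interface) error {
			s, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			s.Spec.Replicas = 1
			_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", s, metav1.UpdateOptions{})
			return err
		}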
	I1019 12:06:04.901320  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.186134766s)
	I1019 12:06:04.901689  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.171973805s)
	I1019 12:06:04.901746  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166888747s)
	I1019 12:06:04.901985  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.127299676s)
	W1019 12:06:04.902023  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:04.902049  356592 retry.go:31] will retry after 231.049371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
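	[Note] The validation error above means at least one YAML document in ig-crd.yaml reaches kubectl without apiVersion and kind set, so --force cannot help and every retry fails identically. A hedged Go sketch of a pre-check that would surface the offending document, using the apimachinery multi-document YAML decoder (checkManifest is a hypothetical helper, not minikube code):

		package main

		import (
			"errors"
			"fmt"
			"io"
			"os"

			utilyaml "k8s.io/apimachinery/pkg/util/yaml"
		)

		// checkManifest reports any YAML document in the file that does not
		// set apiVersion and kind, the exact condition kubectl rejects above.
		func checkManifest(path string) error {
			f, err := os.Open(path)
			if err != nil {
				return err
			}
			defer f.Close()
			dec := utilyaml.NewYAMLOrJSONDecoder(f, 4096)
			for i := 0; ; i++ {
				var doc map[string]interface{}
				if err := dec.Decode(&doc); err != nil {
					if errors.Is(err, io.EOF) {
						return nil // all documents checked
					}
					return err
				}
				if doc == nil {
					continue // empty document between --- separators
				}
				if doc["apiVersion"] == nil || doc["kind"] == nil {
					return fmt.Errorf("%s: document %d: apiVersion/kind not set", path, i)
				}
			}
		}

		func main() {
			if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
				fmt.Println(err)
			}
		}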
	I1019 12:06:04.902130  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.066085786s)
	I1019 12:06:04.902154  356592 addons.go:479] Verifying addon metrics-server=true in "addons-042725"
	I1019 12:06:04.903510  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193366167s)
	I1019 12:06:04.903546  356592 addons.go:479] Verifying addon ingress=true in "addons-042725"
	I1019 12:06:04.904793  356592 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-042725 service yakd-dashboard -n yakd-dashboard
	
	I1019 12:06:04.904861  356592 out.go:179] * Verifying ingress addon...
	I1019 12:06:04.906779  356592 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 12:06:04.911510  356592 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1019 12:06:04.921387  356592 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
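	[Note] The "object has been modified" failure above is the API server's optimistic-concurrency 409: something else updated the "standard" StorageClass between the addon's read and its write. The conventional remedy is to re-read and retry, for example with client-go's retry.RetryOnConflict; a minimal sketch under that assumption (markDefault is hypothetical, the annotation key is the standard default-class marker):

		package addons

		import (
			"context"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/util/retry"
		)

		// markDefault re-reads the StorageClass and retries the update
		// whenever the API server answers with a conflict like the one above.
		func markDefault(cs kubernetes.Interface, name string) error {
			return retry.RetryOnConflict(retry.DefaultRetry, func() error {
				sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				if sc.Annotations == nil {
					sc.Annotations = map[string]string{}
				}
				sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
				_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
				return err // a Conflict here makes RetryOnConflict run the closure again
			})
		}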
	I1019 12:06:05.134098  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:05.360009  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.342569009s)
	W1019 12:06:05.360069  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:06:05.360085  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.148025281s)
	I1019 12:06:05.360095  356592 retry.go:31] will retry after 167.873129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
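	[Note] The "no matches for kind VolumeSnapshotClass" error above is an ordering problem: the custom resource is applied in the same batch as the CRD that defines it, before the CRD is Established. A hedged sketch of the conventional fix, shelling out to kubectl in the order apply CRDs, wait, apply CRs (applyInOrder is hypothetical; the file paths are the ones in the log):

		package main

		import (
			"fmt"
			"os/exec"
		)

		// applyInOrder installs the snapshot CRD, waits until the API server
		// reports it Established, then applies the VolumeSnapshotClass that
		// failed above.
		func applyInOrder() error {
			steps := [][]string{
				{"kubectl", "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
				{"kubectl", "wait", "--for=condition=Established", "--timeout=60s",
					"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
				{"kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
			}
			for _, s := range steps {
				if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
					return fmt.Errorf("%v: %w\n%s", s, err, out)
				}
			}
			return nil
		}

		func main() { fmt.Println(applyInOrder()) }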
	I1019 12:06:05.360105  356592 addons.go:479] Verifying addon registry=true in "addons-042725"
	I1019 12:06:05.360371  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.131927447s)
	I1019 12:06:05.360416  356592 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-042725"
	I1019 12:06:05.362046  356592 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 12:06:05.362064  356592 out.go:179] * Verifying registry addon...
	I1019 12:06:05.364225  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 12:06:05.364225  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 12:06:05.368117  356592 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:06:05.368144  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:05.369175  356592 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:06:05.369198  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:05.468219  356592 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 12:06:05.468240  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
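	[Note] The kapi.go:96 lines that follow are a polling loop over pods matching a label selector. A minimal client-go sketch of that style of wait, assuming "every matching pod Running" as the success condition (waitForPods is hypothetical, not the kapi.go implementation):

		package addons

		import (
			"context"
			"time"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
		)

		// waitForPods polls until every pod matching the selector is Running,
		// mirroring the Pending -> Running transitions logged below.
		func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
			return wait.PollUntilContextTimeout(context.TODO(), time.Second, timeout, true,
				func(ctx context.Context) (bool, error) {
					pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
					if err != nil || len(pods.Items) == 0 {
						return false, nil // not there yet; keep polling
					}
					for _, p := range pods.Items {
						if p.Status.Phase != corev1.PodRunning {
							return false, nil
						}
					}
					return true, nil
				})
		}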
	I1019 12:06:05.528308  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1019 12:06:05.762026  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:05.762065  356592 retry.go:31] will retry after 245.711865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1019 12:06:05.815237  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
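	[Note] The node_ready retries above and below check the node's Ready condition. A small sketch of that predicate against the typed Node object (nodeReady is a hypothetical helper; the condition type and status fields are the standard Kubernetes ones):

		package addons

		import corev1 "k8s.io/api/core/v1"

		// nodeReady reports whether the Node carries a Ready condition with
		// status True, the check behind the "Ready":"False" retries.
		func nodeReady(n *corev1.Node) bool {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue
				}
			}
			return false
		}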
	I1019 12:06:05.867242  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:05.867335  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:05.909786  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:06.008906  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:06.368395  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:06.368406  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:06.469455  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:06.867355  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:06.867507  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:06.910144  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:07.368102  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:07.368116  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:07.409964  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:07.815684  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:07.867834  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:07.867932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:07.909915  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:08.021191  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492828476s)
	I1019 12:06:08.021261  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.012322596s)
	W1019 12:06:08.021293  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:08.021313  356592 retry.go:31] will retry after 515.70648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:08.367866  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:08.367899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:08.409632  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:08.538068  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:08.867684  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:08.867722  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:08.909990  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:09.077167  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:09.077199  356592 retry.go:31] will retry after 944.52464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:09.367345  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:09.367501  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:09.410506  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:09.816346  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:09.867858  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:09.867947  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:09.910532  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:10.022556  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:10.367935  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:10.368089  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:10.409558  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:10.562777  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:10.562809  356592 retry.go:31] will retry after 1.228877817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:10.867396  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:10.867500  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:10.910072  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:11.048870  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 12:06:11.048959  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:11.067302  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:11.175981  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 12:06:11.188344  356592 addons.go:238] Setting addon gcp-auth=true in "addons-042725"
	I1019 12:06:11.188440  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:11.188985  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:11.206940  356592 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 12:06:11.207010  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:11.225414  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:11.319803  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:11.321061  356592 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 12:06:11.322254  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 12:06:11.322271  356592 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 12:06:11.335526  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 12:06:11.335550  356592 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 12:06:11.348320  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:06:11.348346  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
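	[Note] The "scp memory --> path" lines above write an in-memory buffer straight to a file on the node over SSH. A hedged sketch of that pattern with golang.org/x/crypto/ssh (writeRemote and the sudo tee transport are assumptions for illustration, not ssh_runner's actual mechanism):

		package addons

		import (
			"bytes"
			"fmt"

			"golang.org/x/crypto/ssh"
		)

		// writeRemote streams an in-memory buffer to a file on the node over
		// an SSH session; "sudo tee" stands in for the real transport.
		func writeRemote(client *ssh.Client, data []byte, path string) error {
			sess, err := client.NewSession()
			if err != nil {
				return err
			}
			defer sess.Close()
			sess.Stdin = bytes.NewReader(data)
			return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", path))
		}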
	I1019 12:06:11.361061  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:06:11.368404  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:11.368605  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:11.410601  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:11.665099  356592 addons.go:479] Verifying addon gcp-auth=true in "addons-042725"
	I1019 12:06:11.666540  356592 out.go:179] * Verifying gcp-auth addon...
	I1019 12:06:11.669163  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 12:06:11.671534  356592 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 12:06:11.671552  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:11.792610  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:11.867751  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:11.867751  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:11.910628  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:12.172493  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:12.315774  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	W1019 12:06:12.328066  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:12.328093  356592 retry.go:31] will retry after 2.459662068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:12.367856  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:12.367997  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:12.409818  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:12.672956  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:12.867032  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:12.867075  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:12.909940  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:13.172797  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:13.367544  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:13.367669  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:13.410849  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:13.672499  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:13.867765  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:13.867801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:13.910644  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:14.172338  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:14.316062  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:14.367794  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:14.367822  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:14.410467  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:14.672100  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:14.788327  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:14.868204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:14.868215  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:14.909965  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:15.173455  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:15.323282  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:15.323310  356592 retry.go:31] will retry after 2.538443314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:15.367091  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:15.367237  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:15.409811  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:15.672236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:15.867591  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:15.867701  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:15.910443  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:16.172151  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:16.367807  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:16.367861  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:16.410485  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:16.672832  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:16.815378  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:16.867752  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:16.867783  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:16.910467  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:17.172243  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:17.367923  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:17.367984  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:17.409965  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:17.672741  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:17.862292  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:17.867285  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:17.867347  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:17.910190  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:18.172710  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:18.367011  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:18.367030  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:06:18.403487  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:18.403517  356592 retry.go:31] will retry after 3.500276456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:18.410311  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:18.672524  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:18.816261  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:18.867784  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:18.867898  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:18.910271  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:19.171886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:19.367037  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:19.367033  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:19.409905  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:19.672675  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:19.867071  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:19.867082  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:19.909875  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:20.172768  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:20.366943  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:20.367053  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:20.409866  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:20.672990  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:20.867309  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:20.867337  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:20.909843  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:21.172575  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:21.316331  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:21.368113  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:21.368203  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:21.409736  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:21.672573  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:21.867714  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:21.867752  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:21.904925  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:21.909895  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:22.172899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:22.367128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:22.367174  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:22.409961  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:22.439035  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:22.439071  356592 retry.go:31] will retry after 8.473188125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
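	[Note] Across these attempts the retry.go delays grow from 231ms to 8.47s, consistent with jittered exponential backoff. A self-contained Go sketch of that policy (retryWithBackoff is hypothetical, not minikube's retry.go):

		package main

		import (
			"fmt"
			"math/rand"
			"time"
		)

		// retryWithBackoff retries op with roughly doubling, jittered delays,
		// matching the progression seen in the retry.go lines above.
		func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
			delay := initial
			var err error
			for i := 0; i < attempts; i++ {
				if err = op(); err == nil {
					return nil
				}
				time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2)+1)))
				delay *= 2
			}
			return fmt.Errorf("all %d attempts failed: %w", attempts, err)
		}

		func main() {
			n := 0
			err := retryWithBackoff(5, 200*time.Millisecond, func() error {
				n++
				if n < 3 {
					return fmt.Errorf("not yet")
				}
				return nil
			})
			fmt.Println(err) // <nil> once the third attempt succeeds
		}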
	I1019 12:06:22.671745  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:22.867887  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:22.867974  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:22.909557  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:23.172110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:23.367321  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:23.367357  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:23.410383  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:23.672278  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:23.816101  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:23.867782  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:23.867965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:23.910888  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:24.172802  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:24.367014  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:24.367152  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:24.409887  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:24.673054  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:24.867243  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:24.867367  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:24.910131  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:25.172802  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:25.367077  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:25.367184  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:25.409829  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:25.672677  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:25.816308  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:25.867805  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:25.867866  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:25.909380  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:26.172229  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:26.368047  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:26.368047  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:26.409842  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:26.673069  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:26.867067  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:26.867236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:26.909971  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:27.172661  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:27.367350  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:27.367506  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:27.410565  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:27.672249  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:27.867500  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:27.867552  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:27.910216  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:28.172950  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:28.315654  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:28.367126  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:28.367226  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:28.410015  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:28.673309  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:28.867943  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:28.867947  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:28.910512  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:29.172196  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:29.367729  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:29.367773  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:29.410545  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:29.672334  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:29.867767  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:29.867800  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:29.910399  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.172148  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:30.315862  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:30.367589  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:30.367659  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:30.410293  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.671956  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:30.867063  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:30.867236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:30.909749  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.912795  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:31.171995  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:31.367772  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:31.367881  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:31.409532  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:31.446556  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:31.446593  356592 retry.go:31] will retry after 14.325800896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
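	[Note: the retry delays recorded so far (8.473s, then 14.326s, and 11.468s later in this log) are uneven, which is consistent with a randomized (jittered) backoff. The sketch below shows that pattern; the jitter is inferred only from the varying delays and is not a claim about what minikube's retry.go actually does.

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter runs op until it succeeds or attempts run out,
	// sleeping a randomized duration in [base, 2*base) between tries --
	// one way to produce the uneven "will retry after ..." delays above.
	func retryWithJitter(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		// Small base so the example runs quickly; the log's delays sit
		// around 8-15s.
		calls := 0
		_ = retryWithJitter(3, 100*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("apply failed")
			}
			return nil
		})
	}
	]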
	I1019 12:06:31.672327  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:31.867431  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:31.867544  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:31.909983  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:32.172764  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:32.366965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:32.366976  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:32.410094  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:32.672873  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:32.815331  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:32.867998  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:32.868073  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:32.909881  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:33.172721  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:33.367880  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:33.367967  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:33.410128  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:33.671851  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:33.867808  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:33.867930  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:33.909717  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:34.172173  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:34.367488  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:34.367540  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:34.410232  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:34.671909  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:34.867983  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:34.868067  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:34.909733  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:35.172549  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:35.316198  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:35.367600  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:35.367595  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:35.410312  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:35.672070  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:35.867128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:35.867175  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:35.909671  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:36.172469  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:36.367933  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:36.368041  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:36.409738  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:36.672388  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:36.867560  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:36.867696  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:36.910309  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:37.171985  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:37.367740  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:37.367875  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:37.409481  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:37.672377  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:37.816077  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:37.867894  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:37.867968  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:37.910653  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:38.172497  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:38.367072  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:38.367097  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:38.409762  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:38.672582  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:38.867975  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:38.868097  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:38.910089  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:39.173116  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:39.367688  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:39.367864  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:39.410613  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:39.672372  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:39.816144  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:39.867549  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:39.867640  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:39.910588  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:40.172481  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:40.367579  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:40.367689  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:40.410279  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:40.671901  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:40.866826  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:40.866941  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:40.910553  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:41.172117  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:41.367396  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:41.367537  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:41.410111  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:41.672708  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:41.867973  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:41.868069  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:41.909829  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:42.172553  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:42.316052  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:42.368110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:42.368202  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:42.410079  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:42.672955  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:42.867107  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:42.867146  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:42.909647  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:43.172755  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:43.367118  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:43.367123  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:43.409980  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:43.672620  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:43.867670  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:43.867786  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:43.910548  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:44.172084  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:44.367491  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:44.367574  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:44.410305  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:44.672145  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:44.815240  356592 node_ready.go:49] node "addons-042725" is "Ready"
	I1019 12:06:44.815277  356592 node_ready.go:38] duration metric: took 41.002510053s for node "addons-042725" to be "Ready" ...
	I1019 12:06:44.815295  356592 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:06:44.815349  356592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:06:44.832372  356592 api_server.go:72] duration metric: took 41.537165788s to wait for apiserver process to appear ...
	I1019 12:06:44.832404  356592 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:06:44.832447  356592 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 12:06:44.837172  356592 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 12:06:44.838075  356592 api_server.go:141] control plane version: v1.34.1
	I1019 12:06:44.838100  356592 api_server.go:131] duration metric: took 5.688895ms to wait for apiserver health ...
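	[Note: the sequence above polls https://192.168.49.2:8443/healthz until it returns 200 with body "ok", then reads the control-plane version. A minimal Go sketch of such a poll follows; the timeout, poll interval, and the InsecureSkipVerify shortcut are assumptions for illustration, not minikube's implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// A test cluster's apiserver cert is self-signed; skipping
				// verification is a sketch-only shortcut.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Matches the `returned 200: ok` lines in the log.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz did not return ok within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	]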
	I1019 12:06:44.838108  356592 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:06:44.842666  356592 system_pods.go:59] 20 kube-system pods found
	I1019 12:06:44.842699  356592 system_pods.go:61] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending
	I1019 12:06:44.842713  356592 system_pods.go:61] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:44.842719  356592 system_pods.go:61] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending
	I1019 12:06:44.842734  356592 system_pods.go:61] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending
	I1019 12:06:44.842743  356592 system_pods.go:61] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending
	I1019 12:06:44.842748  356592 system_pods.go:61] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:44.842752  356592 system_pods.go:61] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:44.842758  356592 system_pods.go:61] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:44.842766  356592 system_pods.go:61] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:44.842772  356592 system_pods.go:61] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending
	I1019 12:06:44.842776  356592 system_pods.go:61] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:44.842781  356592 system_pods.go:61] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:44.842792  356592 system_pods.go:61] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:44.842801  356592 system_pods.go:61] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending
	I1019 12:06:44.842807  356592 system_pods.go:61] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending
	I1019 12:06:44.842818  356592 system_pods.go:61] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:44.842823  356592 system_pods.go:61] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending
	I1019 12:06:44.842829  356592 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending
	I1019 12:06:44.842834  356592 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending
	I1019 12:06:44.842843  356592 system_pods.go:61] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:44.842853  356592 system_pods.go:74] duration metric: took 4.738957ms to wait for pod list to return data ...
	I1019 12:06:44.842867  356592 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:06:44.844935  356592 default_sa.go:45] found service account: "default"
	I1019 12:06:44.844958  356592 default_sa.go:55] duration metric: took 2.084243ms for default service account to be created ...
	I1019 12:06:44.844969  356592 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:06:44.852085  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:44.852116  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending
	I1019 12:06:44.852128  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:44.852135  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending
	I1019 12:06:44.852142  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending
	I1019 12:06:44.852147  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending
	I1019 12:06:44.852151  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:44.852157  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:44.852171  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:44.852177  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:44.852190  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:44.852197  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:44.852204  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:44.852215  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:44.852224  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending
	I1019 12:06:44.852230  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending
	I1019 12:06:44.852239  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:44.852247  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending
	I1019 12:06:44.852252  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending
	I1019 12:06:44.852260  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending
	I1019 12:06:44.852270  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:44.852293  356592 retry.go:31] will retry after 221.120739ms: missing components: kube-dns
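	[Note: the "missing components: kube-dns" retry above comes from comparing the observed kube-system pods against a required-apps list; kube-dns counts as present only once a coredns pod is Running, which happens a few hundred milliseconds later in this log. A self-contained sketch of that check, with illustrative data taken from the 12:06:44 pod list:

	package main

	import (
		"fmt"
		"strings"
	)

	// missingComponents reports which required apps have no Running pod.
	// pods maps pod name -> phase; required maps app name -> pod name prefix.
	func missingComponents(pods map[string]string, required map[string]string) []string {
		var missing []string
		for app, prefix := range required {
			running := false
			for name, phase := range pods {
				if strings.HasPrefix(name, prefix) && phase == "Running" {
					running = true
					break
				}
			}
			if !running {
				missing = append(missing, app)
			}
		}
		return missing
	}

	func main() {
		// A snapshot like the 12:06:44 pod list: coredns still Pending.
		pods := map[string]string{
			"coredns-66bc5c9577-8bhw9": "Pending",
			"kube-proxy-8swjm":         "Running",
			"storage-provisioner":      "Pending",
		}
		required := map[string]string{"kube-dns": "coredns"}
		fmt.Println(missingComponents(pods, required)) // [kube-dns]
	}
	]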
	I1019 12:06:44.866835  356592 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:06:44.866865  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:44.866845  356592 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:06:44.866884  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:44.910409  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:45.081281  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:45.081315  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:06:45.081322  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:45.081331  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:06:45.081337  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:06:45.081342  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:06:45.081346  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:45.081350  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:45.081354  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:45.081357  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:45.081367  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:45.081372  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:45.081378  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:45.081385  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:45.081392  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:06:45.081399  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:06:45.081407  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:45.081441  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:06:45.081458  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.081468  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.081482  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:45.081500  356592 retry.go:31] will retry after 243.207498ms: missing components: kube-dns
	I1019 12:06:45.181111  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:45.329158  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:45.329193  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:06:45.329199  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Running
	I1019 12:06:45.329207  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:06:45.329212  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:06:45.329218  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:06:45.329222  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:45.329225  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:45.329229  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:45.329232  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:45.329238  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:45.329241  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:45.329245  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:45.329249  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:45.329258  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:06:45.329263  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:06:45.329273  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:45.329278  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:06:45.329285  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.329291  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.329295  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Running
	I1019 12:06:45.329302  356592 system_pods.go:126] duration metric: took 484.327821ms to wait for k8s-apps to be running ...
	I1019 12:06:45.329312  356592 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:06:45.329353  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:06:45.342281  356592 system_svc.go:56] duration metric: took 12.957622ms WaitForService to wait for kubelet
	I1019 12:06:45.342310  356592 kubeadm.go:586] duration metric: took 42.047112038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:06:45.342330  356592 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:06:45.345028  356592 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:06:45.345059  356592 node_conditions.go:123] node cpu capacity is 8
	I1019 12:06:45.345078  356592 node_conditions.go:105] duration metric: took 2.74248ms to run NodePressure ...
	I1019 12:06:45.345089  356592 start.go:241] waiting for startup goroutines ...
	I1019 12:06:45.368073  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:45.368186  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:45.409816  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:45.673513  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:45.772576  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:45.868572  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:45.868715  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:45.910604  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:46.172277  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:46.369204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:46.369258  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:46.411583  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:46.511868  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:46.511910  356592 retry.go:31] will retry after 11.467531854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:46.673444  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:46.867924  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:46.868088  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:46.910161  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:47.173132  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:47.368894  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:47.368989  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:47.410235  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:47.672512  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:47.867545  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:47.867941  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:47.911231  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:48.173619  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:48.367844  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:48.367982  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:48.410226  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:48.672609  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:48.867368  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:48.867467  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:48.910519  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:49.172509  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:49.367663  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:49.367734  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:49.410539  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:49.673707  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:49.868558  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:49.870058  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:49.911832  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:50.174079  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:50.368710  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:50.369110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:50.410695  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:50.673154  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:50.868905  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:50.869013  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:50.910245  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:51.172275  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:51.368010  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:51.368238  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:51.410316  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:51.672212  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:51.867537  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:51.867604  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:51.911076  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:52.173099  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:52.368532  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:52.368566  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:52.411138  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:52.687128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:52.868588  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:52.868707  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:52.910852  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:53.173261  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:53.368780  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:53.368851  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:53.410085  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:53.673271  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:53.867356  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:53.867530  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:53.910370  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:54.172267  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:54.368255  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:54.368360  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:54.411402  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:54.671930  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:54.868458  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:54.868478  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:54.910031  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:55.172662  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:55.368472  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:55.368671  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:55.411899  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:55.672984  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:55.869004  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:55.869135  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:55.911359  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:56.172107  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:56.368581  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:56.368593  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:56.411168  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:56.672363  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:56.867859  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:56.867932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:56.910763  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.173070  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:57.368525  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:57.368620  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:57.410296  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.671905  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:57.868514  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:57.868545  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:57.910487  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.979646  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:58.172910  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:58.368650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:58.368801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:58.410896  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:58.654949  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:58.654983  356592 retry.go:31] will retry after 25.020490151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
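
The stderr above is kubectl's client-side validation: every YAML document must carry top-level apiVersion and kind fields, and ig-crd.yaml evidently serves at least one document without them, so the apply fails even though the other resources are unchanged. A minimal Go sketch of the same check, assuming the sigs.k8s.io/yaml module; the file path and the crude "---" document splitting are illustrative, not what kubectl actually does:

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml" // YAML-to-JSON shim used across the Kubernetes ecosystem
)

// typeMeta mirrors the two fields the validation error complains about.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	raw, err := os.ReadFile("ig-crd.yaml") // illustrative path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A manifest file may hold several documents separated by "---".
	for i, doc := range strings.Split(string(raw), "\n---") {
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("doc %d: unparseable: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// The condition behind "[apiVersion not set, kind not set]".
			fmt.Printf("doc %d: apiVersion or kind not set\n", i)
		}
	}
}

Running something like this against the addon manifest would flag the offending document directly, without resorting to --validate=false.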
	I1019 12:06:58.673027  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:58.868184  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:58.868369  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:58.910444  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:59.172158  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:59.368159  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:59.368269  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:59.409782  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:59.673369  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:59.867630  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:59.867801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:59.910773  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:00.172703  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:00.370204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:00.370235  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:00.409909  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:00.673413  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:00.867969  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:00.868276  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:00.910369  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:01.171702  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:01.368219  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:01.368275  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:01.464591  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:01.672622  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:01.867895  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:01.867915  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:01.914011  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:02.172398  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:02.366965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:02.367234  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:02.410542  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:02.672036  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:02.867865  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:02.867946  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:02.909434  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:03.172303  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:03.367949  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:03.368031  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:03.409639  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:03.672386  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:03.867642  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:03.867729  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:03.911132  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:04.173272  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:04.371217  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:04.371986  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:04.410531  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:04.672534  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:04.867994  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:04.868178  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:04.911049  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:05.173898  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:05.373838  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:05.374708  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:05.582689  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:05.678267  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:05.867912  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:05.867952  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:05.910762  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:06.172642  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:06.368688  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:06.368804  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:06.411337  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:06.673543  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:06.867650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:06.867886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:06.911171  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:07.173043  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:07.367221  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:07.367222  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:07.427491  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:07.672882  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:07.869040  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:07.869112  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:07.911119  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:08.173034  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:08.368886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:08.373338  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:08.474224  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:08.673464  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:08.867734  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:08.867985  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:08.911098  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:09.173018  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:09.367991  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:09.368244  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:09.410127  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:09.673452  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:09.867738  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:09.867920  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:09.909955  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:10.173448  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:10.367562  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:10.367632  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:10.410607  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:10.672528  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:10.867605  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:10.867637  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:10.910146  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:11.173006  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:11.368078  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:11.368119  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:11.411213  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:11.673366  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:11.867570  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:11.867627  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:11.910182  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:12.172159  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:12.368327  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:12.368392  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:12.410617  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:12.673005  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:12.868165  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:12.868282  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:12.909760  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:13.172581  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:13.368110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:13.368207  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:13.410177  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:13.673922  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:13.870569  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:13.871144  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:13.910292  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:14.172370  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:14.367770  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:14.367853  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:14.409999  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:14.673143  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:14.868924  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:14.870198  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:14.911304  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:15.173730  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:15.368331  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:15.368390  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:15.411601  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:15.674134  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:15.868506  356592 kapi.go:107] duration metric: took 1m10.504277719s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 12:07:15.868726  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:15.910876  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:16.172818  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:16.367822  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:16.410695  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:16.672650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:16.868258  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:16.910192  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:17.171797  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:17.368680  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:17.411061  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:17.673404  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:17.867792  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:17.910937  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:18.247769  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:18.376273  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:18.410119  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:18.672012  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:18.868763  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:18.910697  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:19.173298  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:19.369293  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:19.410511  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:19.673640  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:19.876015  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:19.910023  356592 kapi.go:107] duration metric: took 1m15.003240572s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 12:07:20.173646  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:20.368297  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:20.671972  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:20.867899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:21.172391  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:21.367448  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:21.672388  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:21.867276  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:22.172261  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:22.368582  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:22.672932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:22.868546  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:23.174103  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:23.369105  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:23.671952  356592 kapi.go:107] duration metric: took 1m12.002788104s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 12:07:23.673648  356592 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-042725 cluster.
	I1019 12:07:23.674875  356592 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 12:07:23.675970  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:23.677810  356592 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
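
The `gcp-auth-skip-secret` hint above refers to a pod label that the gcp-auth mutating webhook checks before injecting credentials; per the message, the key's presence is what matters. A minimal sketch of a pod that opts out, built with the Kubernetes API types and rendered as a manifest (the pod name and image are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod carrying the label the gcp-auth webhook looks for.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // illustrative name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	out, _ := yaml.Marshal(pod) // render as a manifest you could `kubectl apply`
	fmt.Print(string(out))
}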
	I1019 12:07:23.867788  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:07:24.296115  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:24.296151  356592 retry.go:31] will retry after 35.866657781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:24.368453  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:24.868140  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:25.375783  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:25.868377  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:26.368216  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:26.868896  356592 kapi.go:107] duration metric: took 1m21.504666416s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
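
The kapi.go:96 lines that dominate this stretch of the log are a label-selector poll: list the pods matching an addon's label, print the current phase, and repeat until everything is Running or a timeout fires, at which point kapi.go:107 records the duration. A minimal sketch of that pattern using client-go; the function name, interval, and selector handling are illustrative rather than minikube's exact code:

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPods polls until every pod matching selector is Running.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			if len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}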
	I1019 12:08:00.164742  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 12:08:00.691743  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 12:08:00.691876  356592 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
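
The retry.go:31 lines show the delays growing between attempts (25.0s, then 35.9s) before the addon surfaces the warning above, which suggests a jittered, growing backoff. A minimal sketch of that retry shape using apimachinery's wait.Backoff and client-go's retry helper; the parameter values are illustrative assumptions, not minikube's actual configuration:

package retrysketch

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/retry"
)

// applyWithRetry re-runs apply with jittered, growing delays, mirroring
// the "will retry after ..." lines in the log above.
func applyWithRetry(apply func() error) error {
	backoff := wait.Backoff{
		Duration: 10 * time.Second, // initial delay (illustrative)
		Factor:   1.5,              // grow each attempt
		Jitter:   0.5,              // randomize, hence the uneven observed delays
		Steps:    3,                // attempts before surfacing the error
	}
	return retry.OnError(backoff, func(error) bool { return true }, apply)
}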
	I1019 12:08:00.693835  356592 out.go:179] * Enabled addons: registry-creds, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 12:08:00.695026  356592 addons.go:514] duration metric: took 1m57.399772156s for enable addons: enabled=[registry-creds cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 12:08:00.695078  356592 start.go:246] waiting for cluster config update ...
	I1019 12:08:00.695103  356592 start.go:255] writing updated cluster config ...
	I1019 12:08:00.695459  356592 ssh_runner.go:195] Run: rm -f paused
	I1019 12:08:00.699243  356592 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:08:00.702784  356592 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8bhw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.706944  356592 pod_ready.go:94] pod "coredns-66bc5c9577-8bhw9" is "Ready"
	I1019 12:08:00.706964  356592 pod_ready.go:86] duration metric: took 4.159338ms for pod "coredns-66bc5c9577-8bhw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.708927  356592 pod_ready.go:83] waiting for pod "etcd-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.712273  356592 pod_ready.go:94] pod "etcd-addons-042725" is "Ready"
	I1019 12:08:00.712290  356592 pod_ready.go:86] duration metric: took 3.34608ms for pod "etcd-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.713954  356592 pod_ready.go:83] waiting for pod "kube-apiserver-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.717176  356592 pod_ready.go:94] pod "kube-apiserver-addons-042725" is "Ready"
	I1019 12:08:00.717197  356592 pod_ready.go:86] duration metric: took 3.224376ms for pod "kube-apiserver-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.718917  356592 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.102888  356592 pod_ready.go:94] pod "kube-controller-manager-addons-042725" is "Ready"
	I1019 12:08:01.102915  356592 pod_ready.go:86] duration metric: took 383.979363ms for pod "kube-controller-manager-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.303289  356592 pod_ready.go:83] waiting for pod "kube-proxy-8swjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.703118  356592 pod_ready.go:94] pod "kube-proxy-8swjm" is "Ready"
	I1019 12:08:01.703143  356592 pod_ready.go:86] duration metric: took 399.824693ms for pod "kube-proxy-8swjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.903371  356592 pod_ready.go:83] waiting for pod "kube-scheduler-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:02.302840  356592 pod_ready.go:94] pod "kube-scheduler-addons-042725" is "Ready"
	I1019 12:08:02.302869  356592 pod_ready.go:86] duration metric: took 399.467884ms for pod "kube-scheduler-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:02.302887  356592 pod_ready.go:40] duration metric: took 1.603615654s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:08:02.347940  356592 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:08:02.350311  356592 out.go:179] * Done! kubectl is now configured to use "addons-042725" cluster and "default" namespace by default
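
The pod_ready.go waits just above differ from the earlier kapi.go phase polling: for each kube-system component they check the pod's Ready condition rather than its phase. A minimal sketch of that per-pod check, assuming a corev1.Pod already fetched from the API server (the helper name is illustrative):

package podready

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, the state
// the pod_ready.go:94 lines above confirm for each control-plane pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}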
	
	
	==> CRI-O <==
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.428187418Z" level=info msg="Stopped container 3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6: default/task-pv-pod-restore/task-pv-container" id=686870bb-b8b3-41fb-afc8-d1c0816f9014 name=/runtime.v1.RuntimeService/StopContainer
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.428785372Z" level=info msg="Stopping pod sandbox: 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=d02e690d-d669-405a-b551-609afe6c71c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.429063661Z" level=info msg="Got pod network &{Name:task-pv-pod-restore Namespace:default ID:62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0 UID:667beea4-e25c-42b2-9062-6213236ce3cc NetNS:/var/run/netns/9bba4980-d265-4169-9004-b2b7c498b695 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006141c0}] Aliases:map[]}"
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.429233494Z" level=info msg="Deleting pod default_task-pv-pod-restore from CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.452400696Z" level=info msg="Stopped pod sandbox: 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=d02e690d-d669-405a-b551-609afe6c71c0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.887179062Z" level=info msg="Removing container: 3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6" id=906bef4d-d5b4-4707-a089-d16b87772d31 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:09:03 addons-042725 crio[772]: time="2025-10-19T12:09:03.896016733Z" level=info msg="Removed container 3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6: default/task-pv-pod-restore/task-pv-container" id=906bef4d-d5b4-4707-a089-d16b87772d31 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:09:58 addons-042725 crio[772]: time="2025-10-19T12:09:58.232157717Z" level=info msg="Stopping pod sandbox: 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=6aaecfa5-1a1a-4fb4-8149-7f21596a0d7b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:09:58 addons-042725 crio[772]: time="2025-10-19T12:09:58.232228986Z" level=info msg="Stopped pod sandbox (already stopped): 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=6aaecfa5-1a1a-4fb4-8149-7f21596a0d7b name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:09:58 addons-042725 crio[772]: time="2025-10-19T12:09:58.232585915Z" level=info msg="Removing pod sandbox: 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=962220f7-79b2-4150-b6e7-c54dbc615733 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:09:58 addons-042725 crio[772]: time="2025-10-19T12:09:58.235848922Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:09:58 addons-042725 crio[772]: time="2025-10-19T12:09:58.235906446Z" level=info msg="Removed pod sandbox: 62f3b72772ce1f40272b6dde98f1b2e1b08b4bb0cf8b32def82c0f5cc0b1b5c0" id=962220f7-79b2-4150-b6e7-c54dbc615733 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.418513618Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-s62qc/POD" id=90b1d744-39b2-4d72-bbb6-ad23fc0e2a6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.418639193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.427527075Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-s62qc Namespace:default ID:b333ea58c0cd66db0736db88457d1c72928b2f5794b9d2fc6a69b51d45529d6c UID:e4aab227-4608-4b14-9214-eee98aed73b6 NetNS:/var/run/netns/523e8f65-f171-4a42-84a6-aa47b9805dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad78}] Aliases:map[]}"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.427567856Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-s62qc to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.438506449Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-s62qc Namespace:default ID:b333ea58c0cd66db0736db88457d1c72928b2f5794b9d2fc6a69b51d45529d6c UID:e4aab227-4608-4b14-9214-eee98aed73b6 NetNS:/var/run/netns/523e8f65-f171-4a42-84a6-aa47b9805dcc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ad78}] Aliases:map[]}"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.43867644Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-s62qc for CNI network kindnet (type=ptp)"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.439630237Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.440548779Z" level=info msg="Ran pod sandbox b333ea58c0cd66db0736db88457d1c72928b2f5794b9d2fc6a69b51d45529d6c with infra container: default/hello-world-app-5d498dc89-s62qc/POD" id=90b1d744-39b2-4d72-bbb6-ad23fc0e2a6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.441791869Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2691f53b-36d5-46ea-af4e-a250c584f0e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.441910114Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=2691f53b-36d5-46ea-af4e-a250c584f0e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.441942292Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=2691f53b-36d5-46ea-af4e-a250c584f0e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.442566402Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=54aa7b44-9ad1-43e5-9d87-760bce893922 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:10:45 addons-042725 crio[772]: time="2025-10-19T12:10:45.459636981Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	9c6bafa828a57       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   fdf40ac1e5e26       registry-creds-764b6fb674-rg7vx             kube-system
	44ac272447adb       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   8fc93ef80f7f4       nginx                                       default
	84e8f30032010       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   34b6bbac60f50       busybox                                     default
	3ade97065f11c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	fb7af3710e740       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	a97ff90dab8de       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	2a1f70eb7742e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	bcf2e921d1fbc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   4ba69035e1c41       gcp-auth-78565c9fb4-vcs5x                   gcp-auth
	48fc5eed7d5dd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	e6e0bbb22679d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   964d5efac1d49       gadget-tfffr                                gadget
	75028df70de03       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   39ee0b03cf8da       ingress-nginx-controller-675c5ddd98-jgc9g   ingress-nginx
	c01ae707db89e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   9250fa8992155       registry-proxy-wlzbz                        kube-system
	ffff44fc42fb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	00707c3c4bab5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   181810b6bbe83       amd-gpu-device-plugin-h5jpt                 kube-system
	7e3eb26fc0ee1       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   902e355b125f7       nvidia-device-plugin-daemonset-ddp7p        kube-system
	1be6499ceead7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   8a84d25454c41       snapshot-controller-7d9fbc56b8-qthpd        kube-system
	286cb01381b0e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   f153b7650888c       csi-hostpath-attacher-0                     kube-system
	e74d01dfb7b1e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   7e5bce28fc5ab       csi-hostpath-resizer-0                      kube-system
	fbeda2203e379       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   c032311978789       ingress-nginx-admission-patch-92jcm         ingress-nginx
	15f3c32c2c116       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   7f9fa23f57690       snapshot-controller-7d9fbc56b8-bzfmt        kube-system
	b4cbe25106ffb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   1f0b63d9c52f0       ingress-nginx-admission-create-p6q55        ingress-nginx
	fde2b1c07a1da       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   df43b2dff1f06       registry-6b586f9694-98h42                   kube-system
	069419553a5ee       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   5f8e570a7d824       yakd-dashboard-5ff678cb9-8kxtn              yakd-dashboard
	0f9b8df5b59c4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   86ea4623eebec       local-path-provisioner-648f6765c9-4xrmm     local-path-storage
	2f814989d8185       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   5529ff02db0a9       kube-ingress-dns-minikube                   kube-system
	3b868a98638bd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   c27b011129b0d       metrics-server-85b7d694d7-m56bv             kube-system
	2eb361a243fb3       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   d420fc45ea86e       cloud-spanner-emulator-86bd5cbb97-blgzl     default
	1089a2c2700f2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   7d03bf28ac9fd       coredns-66bc5c9577-8bhw9                    kube-system
	7a4e144a7b1ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   9903e5a1a73d2       storage-provisioner                         kube-system
	392500e9aeeb9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   47286d4dc497e       kube-proxy-8swjm                            kube-system
	cde6c4794a9e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   cbc7045394aed       kindnet-jkhpq                               kube-system
	396948a693fd8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   724f045c92738       kube-controller-manager-addons-042725       kube-system
	09349ccfaf4c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   a2aadcc280058       kube-apiserver-addons-042725                kube-system
	ae636ce017962       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   0b3da88054ae6       kube-scheduler-addons-042725                kube-system
	0d69b9d0659dd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   3048df84bc61e       etcd-addons-042725                          kube-system
	
	
	==> coredns [1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0] <==
	[INFO] 10.244.0.22:55443 - 58125 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005644514s
	[INFO] 10.244.0.22:39411 - 4710 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005182971s
	[INFO] 10.244.0.22:43205 - 48734 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006065197s
	[INFO] 10.244.0.22:45996 - 47057 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005126078s
	[INFO] 10.244.0.22:42110 - 9949 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005688927s
	[INFO] 10.244.0.22:44348 - 22224 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00205042s
	[INFO] 10.244.0.22:41947 - 38479 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002122304s
	[INFO] 10.244.0.26:36036 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241848s
	[INFO] 10.244.0.26:37445 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000188468s
	[INFO] 10.244.0.31:43906 - 8855 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000245692s
	[INFO] 10.244.0.31:60652 - 61800 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000339292s
	[INFO] 10.244.0.31:39915 - 34069 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000115494s
	[INFO] 10.244.0.31:44071 - 18986 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000161442s
	[INFO] 10.244.0.31:40866 - 46165 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011704s
	[INFO] 10.244.0.31:37230 - 4072 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000146852s
	[INFO] 10.244.0.31:36702 - 14753 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003210731s
	[INFO] 10.244.0.31:32904 - 59487 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003336518s
	[INFO] 10.244.0.31:55847 - 34174 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.00500913s
	[INFO] 10.244.0.31:42712 - 19710 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005113422s
	[INFO] 10.244.0.31:41348 - 55417 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005007706s
	[INFO] 10.244.0.31:47114 - 34016 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.00794878s
	[INFO] 10.244.0.31:46373 - 26589 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004784209s
	[INFO] 10.244.0.31:48276 - 39160 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005265976s
	[INFO] 10.244.0.31:39097 - 15268 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.00168059s
	[INFO] 10.244.0.31:48565 - 45057 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001769259s
	
	
	==> describe nodes <==
	Name:               addons-042725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-042725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=addons-042725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_05_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-042725
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-042725"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:05:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-042725
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:09:32 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:09:32 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:09:32 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:09:32 +0000   Sun, 19 Oct 2025 12:06:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-042725
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                44f09d92-1e2d-487d-b4c4-92e6e5b92b49
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  default                     cloud-spanner-emulator-86bd5cbb97-blgzl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  default                     hello-world-app-5d498dc89-s62qc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-tfffr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  gcp-auth                    gcp-auth-78565c9fb4-vcs5x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jgc9g    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m42s
	  kube-system                 amd-gpu-device-plugin-h5jpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-8bhw9                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m43s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpathplugin-vjzh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-addons-042725                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m50s
	  kube-system                 kindnet-jkhpq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m43s
	  kube-system                 kube-apiserver-addons-042725                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-controller-manager-addons-042725        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-8swjm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-addons-042725                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 metrics-server-85b7d694d7-m56bv              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m42s
	  kube-system                 nvidia-device-plugin-daemonset-ddp7p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 registry-6b586f9694-98h42                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-creds-764b6fb674-rg7vx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 registry-proxy-wlzbz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-bzfmt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 snapshot-controller-7d9fbc56b8-qthpd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  local-path-storage          local-path-provisioner-648f6765c9-4xrmm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8kxtn               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m41s  kube-proxy       
	  Normal  Starting                 4m48s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m48s  kubelet          Node addons-042725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m48s  kubelet          Node addons-042725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m48s  kubelet          Node addons-042725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-042725 event: Registered Node addons-042725 in Controller
	  Normal  NodeReady                4m2s   kubelet          Node addons-042725 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 31 d3 aa 8a bd 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	[Oct19 12:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.045444] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023837] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +2.047737] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +8.512033] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[Oct19 12:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[ +32.252549] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	
	
	==> etcd [0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e] <==
	{"level":"warn","ts":"2025-10-19T12:05:55.212611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.219745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.226406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.234473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.241971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.257090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.263180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.269699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.315641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:05.776481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:05.782531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.723483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.729951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.744209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.751030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:07:05.580914Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.612483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:07:05.581028Z","caller":"traceutil/trace.go:172","msg":"trace[1143347139] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"171.748594ms","start":"2025-10-19T12:07:05.409262Z","end":"2025-10-19T12:07:05.581010Z","steps":["trace[1143347139] 'range keys from in-memory index tree'  (duration: 171.536437ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.581068Z","caller":"traceutil/trace.go:172","msg":"trace[258740765] transaction","detail":"{read_only:false; response_revision:1070; number_of_response:1; }","duration":"139.48538ms","start":"2025-10-19T12:07:05.441566Z","end":"2025-10-19T12:07:05.581052Z","steps":["trace[258740765] 'process raft request'  (duration: 85.921607ms)","trace[258740765] 'compare'  (duration: 53.247314ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:07:05.581207Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.889867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-cjpqf\" limit:1 ","response":"range_response_count:1 size:4256"}
	{"level":"info","ts":"2025-10-19T12:07:05.581889Z","caller":"traceutil/trace.go:172","msg":"trace[605640462] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-cjpqf; range_end:; response_count:1; response_revision:1069; }","duration":"200.573383ms","start":"2025-10-19T12:07:05.381298Z","end":"2025-10-19T12:07:05.581871Z","steps":["trace[605640462] 'range keys from in-memory index tree'  (duration: 199.562079ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.615028Z","caller":"traceutil/trace.go:172","msg":"trace[2127932943] transaction","detail":"{read_only:false; response_revision:1071; number_of_response:1; }","duration":"158.787077ms","start":"2025-10-19T12:07:05.456221Z","end":"2025-10-19T12:07:05.615008Z","steps":["trace[2127932943] 'process raft request'  (duration: 158.668886ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.801019Z","caller":"traceutil/trace.go:172","msg":"trace[82180297] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"118.151801ms","start":"2025-10-19T12:07:05.682849Z","end":"2025-10-19T12:07:05.801001Z","steps":["trace[82180297] 'process raft request'  (duration: 118.101294ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.801157Z","caller":"traceutil/trace.go:172","msg":"trace[448752863] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"160.420613ms","start":"2025-10-19T12:07:05.640727Z","end":"2025-10-19T12:07:05.801147Z","steps":["trace[448752863] 'process raft request'  (duration: 78.170677ms)","trace[448752863] 'compare'  (duration: 81.900574ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:07:07.361013Z","caller":"traceutil/trace.go:172","msg":"trace[996103451] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"155.396919ms","start":"2025-10-19T12:07:07.205597Z","end":"2025-10-19T12:07:07.360993Z","steps":["trace[996103451] 'process raft request'  (duration: 155.27591ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:18.246372Z","caller":"traceutil/trace.go:172","msg":"trace[790859880] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"186.670945ms","start":"2025-10-19T12:07:18.059681Z","end":"2025-10-19T12:07:18.246352Z","steps":["trace[790859880] 'process raft request'  (duration: 124.603812ms)","trace[790859880] 'compare'  (duration: 61.893846ms)"],"step_count":2}
	
	
	==> gcp-auth [bcf2e921d1fbc57c1dc8f9141610578d1ee199190d26ea348e19f74933486229] <==
	2025/10/19 12:07:22 GCP Auth Webhook started!
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	2025/10/19 12:08:13 Ready to marshal response ...
	2025/10/19 12:08:13 Ready to write response ...
	2025/10/19 12:08:13 Ready to marshal response ...
	2025/10/19 12:08:13 Ready to write response ...
	2025/10/19 12:08:20 Ready to marshal response ...
	2025/10/19 12:08:20 Ready to write response ...
	2025/10/19 12:08:20 Ready to marshal response ...
	2025/10/19 12:08:20 Ready to write response ...
	2025/10/19 12:08:21 Ready to marshal response ...
	2025/10/19 12:08:21 Ready to write response ...
	2025/10/19 12:08:31 Ready to marshal response ...
	2025/10/19 12:08:31 Ready to write response ...
	2025/10/19 12:08:56 Ready to marshal response ...
	2025/10/19 12:08:56 Ready to write response ...
	2025/10/19 12:10:45 Ready to marshal response ...
	2025/10/19 12:10:45 Ready to write response ...
	
	
	==> kernel <==
	 12:10:46 up  1:53,  0 user,  load average: 0.25, 1.19, 1.67
	Linux addons-042725 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1] <==
	I1019 12:08:44.584045       1 main.go:301] handling current node
	I1019 12:08:54.590023       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:08:54.590057       1 main.go:301] handling current node
	I1019 12:09:04.583641       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:04.583675       1 main.go:301] handling current node
	I1019 12:09:14.583513       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:14.583546       1 main.go:301] handling current node
	I1019 12:09:24.584498       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:24.584529       1 main.go:301] handling current node
	I1019 12:09:34.584505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:34.584554       1 main.go:301] handling current node
	I1019 12:09:44.584845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:44.584887       1 main.go:301] handling current node
	I1019 12:09:54.591933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:09:54.591964       1 main.go:301] handling current node
	I1019 12:10:04.583507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:10:04.583543       1 main.go:301] handling current node
	I1019 12:10:14.587617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:10:14.587652       1 main.go:301] handling current node
	I1019 12:10:24.591729       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:10:24.591771       1 main.go:301] handling current node
	I1019 12:10:34.588486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:10:34.588520       1 main.go:301] handling current node
	I1019 12:10:44.590403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:10:44.590457       1 main.go:301] handling current node
	
	
	==> kube-apiserver [09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 12:06:50.690828       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.154.125:443: connect: connection refused" logger="UnhandledError"
	W1019 12:06:51.692147       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:51.692189       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1019 12:06:51.692201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:06:51.692158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:51.692270       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:06:51.693390       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:06:55.701047       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:55.701077       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E1019 12:06:55.701169       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:06:55.718256       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 12:08:09.973499       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48254: use of closed network connection
	E1019 12:08:10.120394       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48288: use of closed network connection
	I1019 12:08:21.114637       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 12:08:21.298316       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.203.205"}
	I1019 12:08:41.014354       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 12:10:45.182550       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.186.160"}
	
	
	==> kube-controller-manager [396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554] <==
	I1019 12:06:02.708322       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 12:06:02.708368       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:06:02.708596       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:06:02.709368       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:06:02.709442       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:06:02.709442       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:06:02.709451       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:06:02.709473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 12:06:02.709524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:06:02.709525       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:06:02.711721       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:06:02.712915       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:06:02.712915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:02.717166       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:02.717179       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:06:02.723388       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:06:02.727647       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 12:06:32.717735       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:06:32.717877       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 12:06:32.717926       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 12:06:32.734370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 12:06:32.738130       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 12:06:32.818801       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:32.839013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:06:47.712982       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963] <==
	I1019 12:06:04.167664       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:06:04.486778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:06:04.589534       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:06:04.589576       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:06:04.589656       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:06:04.699861       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:06:04.699933       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:06:04.707709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:06:04.708895       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:06:04.709236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:06:04.710831       1 config.go:309] "Starting node config controller"
	I1019 12:06:04.710892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:06:04.711276       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:06:04.713655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:06:04.711370       1 config.go:200] "Starting service config controller"
	I1019 12:06:04.715700       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:06:04.711454       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:06:04.715964       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:06:04.811187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:06:04.816532       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:06:04.817594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:06:04.818762       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351] <==
	I1019 12:05:55.879920       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:05:55.880891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 12:05:55.881082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:05:55.881237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:05:55.881453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:05:55.882413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:05:55.882466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:05:55.882625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:05:55.882660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:05:55.882754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:05:55.882834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:05:55.882890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:05:55.882892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:05:55.883187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:05:55.883222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:05:55.883282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:05:55.883378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:05:55.883706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:05:55.883776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:05:55.883917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:05:55.883987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:05:56.688486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:05:56.690481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:05:56.842963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1019 12:05:59.680577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:08:58 addons-042725 kubelet[1274]: I1019 12:08:58.195784    1274 scope.go:117] "RemoveContainer" containerID="c961716109c9b518228888909876a69f31ade35a73d765952503883cc1038ea3"
	Oct 19 12:09:00 addons-042725 kubelet[1274]: I1019 12:09:00.885511    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-rg7vx" podStartSLOduration=176.814998756 podStartE2EDuration="2m57.885487467s" podCreationTimestamp="2025-10-19 12:06:03 +0000 UTC" firstStartedPulling="2025-10-19 12:08:59.177046524 +0000 UTC m=+181.103482782" lastFinishedPulling="2025-10-19 12:09:00.247535246 +0000 UTC m=+182.173971493" observedRunningTime="2025-10-19 12:09:00.88505178 +0000 UTC m=+182.811488067" watchObservedRunningTime="2025-10-19 12:09:00.885487467 +0000 UTC m=+182.811923732"
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.594139    1274 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lqcj\" (UniqueName: \"kubernetes.io/projected/667beea4-e25c-42b2-9062-6213236ce3cc-kube-api-access-9lqcj\") pod \"667beea4-e25c-42b2-9062-6213236ce3cc\" (UID: \"667beea4-e25c-42b2-9062-6213236ce3cc\") "
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.594290    1274 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^62ee0fc7-ace4-11f0-ab4a-0ef6929ab6de\") pod \"667beea4-e25c-42b2-9062-6213236ce3cc\" (UID: \"667beea4-e25c-42b2-9062-6213236ce3cc\") "
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.594308    1274 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/667beea4-e25c-42b2-9062-6213236ce3cc-gcp-creds\") pod \"667beea4-e25c-42b2-9062-6213236ce3cc\" (UID: \"667beea4-e25c-42b2-9062-6213236ce3cc\") "
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.594471    1274 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/667beea4-e25c-42b2-9062-6213236ce3cc-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "667beea4-e25c-42b2-9062-6213236ce3cc" (UID: "667beea4-e25c-42b2-9062-6213236ce3cc"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.596385    1274 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667beea4-e25c-42b2-9062-6213236ce3cc-kube-api-access-9lqcj" (OuterVolumeSpecName: "kube-api-access-9lqcj") pod "667beea4-e25c-42b2-9062-6213236ce3cc" (UID: "667beea4-e25c-42b2-9062-6213236ce3cc"). InnerVolumeSpecName "kube-api-access-9lqcj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.597477    1274 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^62ee0fc7-ace4-11f0-ab4a-0ef6929ab6de" (OuterVolumeSpecName: "task-pv-storage") pod "667beea4-e25c-42b2-9062-6213236ce3cc" (UID: "667beea4-e25c-42b2-9062-6213236ce3cc"). InnerVolumeSpecName "pvc-81450524-97e5-4005-9f7a-a42c02f532aa". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.695571    1274 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lqcj\" (UniqueName: \"kubernetes.io/projected/667beea4-e25c-42b2-9062-6213236ce3cc-kube-api-access-9lqcj\") on node \"addons-042725\" DevicePath \"\""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.695636    1274 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-81450524-97e5-4005-9f7a-a42c02f532aa\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^62ee0fc7-ace4-11f0-ab4a-0ef6929ab6de\") on node \"addons-042725\" "
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.695648    1274 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/667beea4-e25c-42b2-9062-6213236ce3cc-gcp-creds\") on node \"addons-042725\" DevicePath \"\""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.700024    1274 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-81450524-97e5-4005-9f7a-a42c02f532aa" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^62ee0fc7-ace4-11f0-ab4a-0ef6929ab6de") on node "addons-042725"
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.796261    1274 reconciler_common.go:299] "Volume detached for volume \"pvc-81450524-97e5-4005-9f7a-a42c02f532aa\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^62ee0fc7-ace4-11f0-ab4a-0ef6929ab6de\") on node \"addons-042725\" DevicePath \"\""
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.885925    1274 scope.go:117] "RemoveContainer" containerID="3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6"
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.896278    1274 scope.go:117] "RemoveContainer" containerID="3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6"
	Oct 19 12:09:03 addons-042725 kubelet[1274]: E1019 12:09:03.896674    1274 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6\": container with ID starting with 3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6 not found: ID does not exist" containerID="3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6"
	Oct 19 12:09:03 addons-042725 kubelet[1274]: I1019 12:09:03.896713    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6"} err="failed to get container status \"3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6\": rpc error: code = NotFound desc = could not find container \"3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6\": container with ID starting with 3e13e3b3051ae4eaddebf4804b9e651178593e2356d7b7b89ba4b43cdfb83dc6 not found: ID does not exist"
	Oct 19 12:09:04 addons-042725 kubelet[1274]: I1019 12:09:04.158506    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667beea4-e25c-42b2-9062-6213236ce3cc" path="/var/lib/kubelet/pods/667beea4-e25c-42b2-9062-6213236ce3cc/volumes"
	Oct 19 12:09:23 addons-042725 kubelet[1274]: I1019 12:09:23.154965    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ddp7p" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:09:46 addons-042725 kubelet[1274]: I1019 12:09:46.155202    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wlzbz" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:09:51 addons-042725 kubelet[1274]: I1019 12:09:51.154920    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-98h42" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:10:00 addons-042725 kubelet[1274]: I1019 12:10:00.154624    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h5jpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:10:31 addons-042725 kubelet[1274]: I1019 12:10:31.154957    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ddp7p" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:10:45 addons-042725 kubelet[1274]: I1019 12:10:45.168237    1274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e4aab227-4608-4b14-9214-eee98aed73b6-gcp-creds\") pod \"hello-world-app-5d498dc89-s62qc\" (UID: \"e4aab227-4608-4b14-9214-eee98aed73b6\") " pod="default/hello-world-app-5d498dc89-s62qc"
	Oct 19 12:10:45 addons-042725 kubelet[1274]: I1019 12:10:45.168305    1274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxv7\" (UniqueName: \"kubernetes.io/projected/e4aab227-4608-4b14-9214-eee98aed73b6-kube-api-access-zqxv7\") pod \"hello-world-app-5d498dc89-s62qc\" (UID: \"e4aab227-4608-4b14-9214-eee98aed73b6\") " pod="default/hello-world-app-5d498dc89-s62qc"
	
	
	==> storage-provisioner [7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe] <==
	W1019 12:10:22.183243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:24.186247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:24.190200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:26.193238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:26.197957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:28.200775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:28.204415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:30.207132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:30.210839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:32.213730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:32.217821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:34.220814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:34.224739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:36.228166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:36.232116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:38.235076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:38.240273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:40.243778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:40.247625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:42.250524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:42.254195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:44.257190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:44.262386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:46.265706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:10:46.270931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
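The storage-provisioner block at the end of the log dump above is a steady stream of API deprecation warnings: something in that pod still reads core/v1 Endpoints every couple of seconds (a cadence consistent with an endpoints-based leader-election lock, though the report does not show the caller). Below is a minimal client-go sketch of the replacement the warning names, listing discovery.k8s.io/v1 EndpointSlices instead of v1 Endpoints; the kubeconfig path and namespace are illustrative assumptions, not values taken from this report.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Deprecated read that triggers the warning above:
		//   cs.CoreV1().Endpoints("kube-system").List(...)
		// Replacement: the discovery.k8s.io/v1 EndpointSlice API.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

If the caller really is a leader-election lock, the idiomatic fix is to switch the resource lock from endpoints to leases rather than to EndpointSlices.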
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-042725 -n addons-042725
helpers_test.go:269: (dbg) Run:  kubectl --context addons-042725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm: exit status 1 (57.329682ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p6q55" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-92jcm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (232.332618ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:10:47.570393  371455 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:10:47.570658  371455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:10:47.570667  371455 out.go:374] Setting ErrFile to fd 2...
	I1019 12:10:47.570671  371455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:10:47.570854  371455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:10:47.571120  371455 mustload.go:65] Loading cluster: addons-042725
	I1019 12:10:47.571455  371455 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:10:47.571468  371455 addons.go:606] checking whether the cluster is paused
	I1019 12:10:47.571546  371455 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:10:47.571557  371455 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:10:47.571932  371455 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:10:47.590699  371455 ssh_runner.go:195] Run: systemctl --version
	I1019 12:10:47.590766  371455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:10:47.608188  371455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:10:47.702957  371455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:10:47.703039  371455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:10:47.732880  371455 cri.go:89] found id: "9c6bafa828a57d417b096987e633cf43595107d57fa768fd10027ea90e805cce"
	I1019 12:10:47.732915  371455 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:10:47.732922  371455 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:10:47.732928  371455 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:10:47.732932  371455 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:10:47.732937  371455 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:10:47.732941  371455 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:10:47.732945  371455 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:10:47.732949  371455 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:10:47.732965  371455 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:10:47.732974  371455 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:10:47.732977  371455 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:10:47.732981  371455 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:10:47.732985  371455 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:10:47.732989  371455 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:10:47.732998  371455 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:10:47.733003  371455 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:10:47.733007  371455 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:10:47.733010  371455 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:10:47.733012  371455 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:10:47.733014  371455 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:10:47.733016  371455 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:10:47.733019  371455 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:10:47.733021  371455 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:10:47.733023  371455 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:10:47.733025  371455 cri.go:89] found id: ""
	I1019 12:10:47.733083  371455 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:10:47.746985  371455 out.go:203] 
	W1019 12:10:47.748081  371455 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:10:47.748100  371455 out.go:285] * 
	* 
	W1019 12:10:47.752113  371455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:10:47.753371  371455 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable ingress --alsologtostderr -v=1: exit status 11 (229.723318ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:10:47.802906  371518 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:10:47.803165  371518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:10:47.803175  371518 out.go:374] Setting ErrFile to fd 2...
	I1019 12:10:47.803180  371518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:10:47.803368  371518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:10:47.803677  371518 mustload.go:65] Loading cluster: addons-042725
	I1019 12:10:47.804052  371518 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:10:47.804072  371518 addons.go:606] checking whether the cluster is paused
	I1019 12:10:47.804173  371518 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:10:47.804190  371518 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:10:47.804649  371518 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:10:47.822538  371518 ssh_runner.go:195] Run: systemctl --version
	I1019 12:10:47.822611  371518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:10:47.839595  371518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:10:47.933893  371518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:10:47.933975  371518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:10:47.963759  371518 cri.go:89] found id: "9c6bafa828a57d417b096987e633cf43595107d57fa768fd10027ea90e805cce"
	I1019 12:10:47.963779  371518 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:10:47.963783  371518 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:10:47.963786  371518 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:10:47.963789  371518 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:10:47.963792  371518 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:10:47.963795  371518 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:10:47.963798  371518 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:10:47.963800  371518 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:10:47.963806  371518 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:10:47.963808  371518 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:10:47.963810  371518 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:10:47.963813  371518 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:10:47.963815  371518 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:10:47.963824  371518 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:10:47.963840  371518 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:10:47.963845  371518 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:10:47.963849  371518 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:10:47.963851  371518 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:10:47.963854  371518 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:10:47.963856  371518 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:10:47.963859  371518 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:10:47.963861  371518 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:10:47.963863  371518 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:10:47.963865  371518 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:10:47.963868  371518 cri.go:89] found id: ""
	I1019 12:10:47.963904  371518 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:10:47.977523  371518 out.go:203] 
	W1019 12:10:47.978738  371518 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:10:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:10:47.978760  371518 out.go:285] * 
	* 
	W1019 12:10:47.982686  371518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:10:47.983869  371518 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.12s)
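The terminal failure here is not ingress itself but the two addons disable calls above: both exit 11 with MK_ADDON_DISABLE_PAUSED because minikube's pre-flight "is the cluster paused?" check runs sudo runc list -f json on the node, and /run/runc does not exist under this crio configuration. The following is a minimal local approximation of that two-step check (a sketch, not minikube's actual implementation, which runs these commands over SSH via ssh_runner).

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1, mirrored from the stderr above: enumerate kube-system
		// containers through the CRI. This step succeeds in the log.
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		fmt.Printf("crictl returned %d bytes of container IDs\n", len(ids))

		// Step 2, the one that fails: ask runc directly for its container
		// list to see which are paused.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node runc prints "open /run/runc: no such file or
			// directory" and exits 1, which minikube surfaces as exit 11.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}

The crictl step finds some two dozen running kube-system containers, so the cluster itself is healthy; only the runc query fails, which is why every subsequent addons disable in this report aborts the same way.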

                                                
                                    
TestAddons/parallel/InspektorGadget (6.24s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-tfffr" [17c6c67c-fc61-4078-b4be-87597180d44d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003355753s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (230.915863ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:08:26.995888  368005 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:26.996120  368005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:26.996128  368005 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:26.996132  368005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:26.996326  368005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:26.996611  368005 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:26.996924  368005 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:26.996939  368005 addons.go:606] checking whether the cluster is paused
	I1019 12:08:26.997016  368005 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:26.997029  368005 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:26.997447  368005 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:27.016194  368005 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:27.016252  368005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:27.033307  368005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:27.127084  368005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:27.127191  368005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:27.155876  368005 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:27.155898  368005 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:27.155902  368005 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:27.155906  368005 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:27.155909  368005 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:27.155912  368005 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:27.155915  368005 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:27.155918  368005 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:27.155920  368005 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:27.155935  368005 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:27.155940  368005 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:27.155944  368005 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:27.155953  368005 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:27.155965  368005 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:27.155972  368005 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:27.155978  368005 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:27.155984  368005 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:27.155988  368005 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:27.155991  368005 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:27.155993  368005 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:27.155996  368005 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:27.155998  368005 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:27.156000  368005 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:27.156002  368005 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:27.156011  368005 cri.go:89] found id: ""
	I1019 12:08:27.156059  368005 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:27.169865  368005 out.go:203] 
	W1019 12:08:27.171152  368005 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:27.171175  368005 out.go:285] * 
	* 
	W1019 12:08:27.175034  368005 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:27.176538  368005 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.24s)
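The InspektorGadget failure has the same shape: the gadget pod reports healthy within six seconds, and only the trailing addons disable call fails with the MK_ADDON_DISABLE_PAUSED / /run/runc signature sketched after the Ingress failure above. The MetricsServer and CSI tests below repeat this pattern.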

                                                
                                    
TestAddons/parallel/MetricsServer (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.162018ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003013606s
addons_test.go:463: (dbg) Run:  kubectl --context addons-042725 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (232.948073ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:08:15.475335  366173 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:15.475601  366173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:15.475611  366173 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:15.475614  366173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:15.475835  366173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:15.476099  366173 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:15.476410  366173 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:15.476436  366173 addons.go:606] checking whether the cluster is paused
	I1019 12:08:15.476518  366173 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:15.476531  366173 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:15.476898  366173 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:15.496118  366173 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:15.496177  366173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:15.513634  366173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:15.608572  366173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:15.608647  366173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:15.637576  366173 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:15.637616  366173 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:15.637620  366173 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:15.637623  366173 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:15.637626  366173 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:15.637630  366173 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:15.637633  366173 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:15.637637  366173 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:15.637641  366173 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:15.637654  366173 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:15.637662  366173 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:15.637666  366173 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:15.637670  366173 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:15.637679  366173 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:15.637683  366173 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:15.637695  366173 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:15.637700  366173 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:15.637707  366173 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:15.637710  366173 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:15.637712  366173 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:15.637714  366173 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:15.637717  366173 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:15.637719  366173 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:15.637722  366173 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:15.637724  366173 cri.go:89] found id: ""
	I1019 12:08:15.637778  366173 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:15.651972  366173 out.go:203] 
	W1019 12:08:15.653203  366173 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:15.653227  366173 out.go:285] * 
	* 
	W1019 12:08:15.657190  366173 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:15.658591  366173 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/CSI (49.04s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1019 12:08:15.665628  355262 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1019 12:08:15.669011  355262 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 12:08:15.669041  355262 kapi.go:107] duration metric: took 3.445603ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.458603ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-042725 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-042725 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4cd8e67a-c63e-433d-b9d1-ac4315582d89] Pending
helpers_test.go:352: "task-pv-pod" [4cd8e67a-c63e-433d-b9d1-ac4315582d89] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4cd8e67a-c63e-433d-b9d1-ac4315582d89] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003757131s
addons_test.go:572: (dbg) Run:  kubectl --context addons-042725 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-042725 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-042725 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-042725 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-042725 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-042725 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-042725 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [667beea4-e25c-42b2-9062-6213236ce3cc] Pending
helpers_test.go:352: "task-pv-pod-restore" [667beea4-e25c-42b2-9062-6213236ce3cc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [667beea4-e25c-42b2-9062-6213236ce3cc] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004048599s
addons_test.go:614: (dbg) Run:  kubectl --context addons-042725 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-042725 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-042725 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (233.110968ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:09:04.273660  369349 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:09:04.273908  369349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:09:04.273916  369349 out.go:374] Setting ErrFile to fd 2...
	I1019 12:09:04.273920  369349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:09:04.274120  369349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:09:04.274359  369349 mustload.go:65] Loading cluster: addons-042725
	I1019 12:09:04.274706  369349 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:09:04.274722  369349 addons.go:606] checking whether the cluster is paused
	I1019 12:09:04.274802  369349 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:09:04.274814  369349 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:09:04.275177  369349 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:09:04.293371  369349 ssh_runner.go:195] Run: systemctl --version
	I1019 12:09:04.293417  369349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:09:04.310726  369349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:09:04.405218  369349 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:09:04.405288  369349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:09:04.437290  369349 cri.go:89] found id: "9c6bafa828a57d417b096987e633cf43595107d57fa768fd10027ea90e805cce"
	I1019 12:09:04.437341  369349 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:09:04.437347  369349 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:09:04.437352  369349 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:09:04.437359  369349 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:09:04.437364  369349 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:09:04.437369  369349 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:09:04.437372  369349 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:09:04.437376  369349 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:09:04.437386  369349 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:09:04.437389  369349 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:09:04.437391  369349 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:09:04.437394  369349 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:09:04.437397  369349 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:09:04.437401  369349 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:09:04.437416  369349 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:09:04.437438  369349 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:09:04.437444  369349 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:09:04.437449  369349 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:09:04.437452  369349 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:09:04.437457  369349 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:09:04.437461  369349 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:09:04.437465  369349 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:09:04.437480  369349 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:09:04.437488  369349 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:09:04.437491  369349 cri.go:89] found id: ""
	I1019 12:09:04.437554  369349 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:09:04.451807  369349 out.go:203] 
	W1019 12:09:04.453237  369349 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:09:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:09:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:09:04.453257  369349 out.go:285] * 
	* 
	W1019 12:09:04.457229  369349 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:09:04.458513  369349 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.309955ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:09:04.516486  369413 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:09:04.516783  369413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:09:04.516794  369413 out.go:374] Setting ErrFile to fd 2...
	I1019 12:09:04.516798  369413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:09:04.517022  369413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:09:04.517360  369413 mustload.go:65] Loading cluster: addons-042725
	I1019 12:09:04.517881  369413 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:09:04.517907  369413 addons.go:606] checking whether the cluster is paused
	I1019 12:09:04.518046  369413 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:09:04.518064  369413 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:09:04.518679  369413 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:09:04.537111  369413 ssh_runner.go:195] Run: systemctl --version
	I1019 12:09:04.537169  369413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:09:04.555795  369413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:09:04.651076  369413 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:09:04.651178  369413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:09:04.680791  369413 cri.go:89] found id: "9c6bafa828a57d417b096987e633cf43595107d57fa768fd10027ea90e805cce"
	I1019 12:09:04.680823  369413 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:09:04.680827  369413 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:09:04.680830  369413 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:09:04.680832  369413 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:09:04.680837  369413 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:09:04.680839  369413 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:09:04.680842  369413 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:09:04.680849  369413 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:09:04.680862  369413 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:09:04.680864  369413 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:09:04.680867  369413 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:09:04.680870  369413 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:09:04.680873  369413 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:09:04.680876  369413 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:09:04.680889  369413 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:09:04.680896  369413 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:09:04.680901  369413 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:09:04.680903  369413 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:09:04.680906  369413 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:09:04.680940  369413 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:09:04.680949  369413 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:09:04.680952  369413 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:09:04.680956  369413 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:09:04.680959  369413 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:09:04.680962  369413 cri.go:89] found id: ""
	I1019 12:09:04.681016  369413 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:09:04.695698  369413 out.go:203] 
	W1019 12:09:04.697017  369413 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:09:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:09:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:09:04.697035  369413 out.go:285] * 
	* 
	W1019 12:09:04.700942  369413 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:09:04.702276  369413 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (49.04s)
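Note: every addon enable/disable failure in this report shares this root cause. Before touching an addon, minikube checks whether the cluster is paused: it lists kube-system containers through crictl (which succeeds above) and then runs `sudo runc list -f json` on the node, which exits 1 because /run/runc does not exist in this crio image. A minimal repro against the same profile, reusing only the two commands visible in the stderr above (wrapping them in `minikube ssh` is the one assumption here):

	# succeeds, matching the cri.go lines in the stderr
	minikube -p addons-042725 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# fails with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED
	minikube -p addons-042725 ssh -- sudo runc list -f json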

TestAddons/parallel/Headlamp (2.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-042725 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-042725 --alsologtostderr -v=1: exit status 11 (230.749728ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:10.404978  365231 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:10.405096  365231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:10.405108  365231 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:10.405115  365231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:10.405296  365231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:10.405628  365231 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:10.405976  365231 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:10.405995  365231 addons.go:606] checking whether the cluster is paused
	I1019 12:08:10.406106  365231 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:10.406123  365231 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:10.406610  365231 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:10.424713  365231 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:10.424766  365231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:10.442051  365231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:10.536177  365231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:10.536288  365231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:10.564690  365231 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:10.564714  365231 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:10.564719  365231 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:10.564722  365231 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:10.564724  365231 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:10.564728  365231 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:10.564730  365231 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:10.564733  365231 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:10.564735  365231 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:10.564741  365231 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:10.564744  365231 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:10.564748  365231 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:10.564751  365231 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:10.564755  365231 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:10.564758  365231 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:10.564764  365231 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:10.564766  365231 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:10.564771  365231 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:10.564773  365231 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:10.564775  365231 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:10.564780  365231 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:10.564783  365231 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:10.564785  365231 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:10.564787  365231 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:10.564789  365231 cri.go:89] found id: ""
	I1019 12:08:10.564835  365231 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:10.578857  365231 out.go:203] 
	W1019 12:08:10.580163  365231 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:10.580183  365231 out.go:285] * 
	* 
	W1019 12:08:10.584235  365231 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:10.585643  365231 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-042725 --alsologtostderr -v=1": exit status 11
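Note: Headlamp trips over the same paused check, reported as MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED. A sketch for confirming which low-level OCI runtime crio is actually configured with on the node; the config location /etc/crio/ and the key names are standard crio.conf fields, but treat both as assumptions since the kicbase image may lay things out differently:

	# keys shown are standard crio.conf fields (assumed present on this image)
	minikube -p addons-042725 ssh -- 'grep -Rn "default_runtime\|runtime_path" /etc/crio/ 2>/dev/null'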
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-042725
helpers_test.go:243: (dbg) docker inspect addons-042725:

-- stdout --
	[
	    {
	        "Id": "f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4",
	        "Created": "2025-10-19T12:05:45.305517142Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 357254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:05:45.341931582Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/hosts",
	        "LogPath": "/var/lib/docker/containers/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4/f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4-json.log",
	        "Name": "/addons-042725",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-042725:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-042725",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f0962584dd5d175ba9e543890fa53aa02ceb084041959f261711e3a1618f20a4",
	                "LowerDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a64981fbd7acf47b0c8941e1289b39bd94c3acbccb56f6d65603f5ef7ee03fe8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-042725",
	                "Source": "/var/lib/docker/volumes/addons-042725/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-042725",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-042725",
	                "name.minikube.sigs.k8s.io": "addons-042725",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "62e03c13e2e6bec5ee9197f03f522bee707bae2e6d6e6af712f0f688e2de996c",
	            "SandboxKey": "/var/run/docker/netns/62e03c13e2e6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-042725": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:58:af:55:9d:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "72895bb5262d44434cac86093316b6324cc823786d71e0451c062b6c4dad043c",
	                    "EndpointID": "f7da72f0e5832dc751a154b659d2ce0ff9de14d2eac9969f1add0e403856235c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-042725",
	                        "f0962584dd5d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
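Note: the SSH endpoint used throughout this report (127.0.0.1:33138) comes straight from this inspect output. The template below is the same one cli_runner executes in the stderr logs; it resolves the host port bound to the container's 22/tcp:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-042725
	# prints 33138, matching NetworkSettings.Ports["22/tcp"].HostPort in the JSON above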
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-042725 -n addons-042725
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-042725 logs -n 25: (1.097802519s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122372 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-122372   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ delete  │ -p download-only-122372                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-122372   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ start   │ -o=json --download-only -p download-only-296979 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-296979   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ delete  │ -p download-only-296979                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-296979   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ delete  │ -p download-only-122372                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-122372   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ delete  │ -p download-only-296979                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-296979   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ start   │ --download-only -p download-docker-580627 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-580627 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ -p download-docker-580627                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-580627 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ start   │ --download-only -p binary-mirror-904842 --alsologtostderr --binary-mirror http://127.0.0.1:34101 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-904842   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ -p binary-mirror-904842                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-904842   │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ addons  │ enable dashboard -p addons-042725                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-042725                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ start   │ -p addons-042725 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:08 UTC │
	│ addons  │ addons-042725 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ addons-042725 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	│ addons  │ enable headlamp -p addons-042725 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-042725          │ jenkins │ v1.37.0 │ 19 Oct 25 12:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:05:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:05:22.402114  356592 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:05:22.402364  356592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:22.402373  356592 out.go:374] Setting ErrFile to fd 2...
	I1019 12:05:22.402377  356592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:22.402558  356592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:05:22.403073  356592 out.go:368] Setting JSON to false
	I1019 12:05:22.403984  356592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6470,"bootTime":1760869052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:05:22.404061  356592 start.go:141] virtualization: kvm guest
	I1019 12:05:22.405823  356592 out.go:179] * [addons-042725] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:05:22.407575  356592 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:05:22.407585  356592 notify.go:220] Checking for updates...
	I1019 12:05:22.409770  356592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:05:22.410950  356592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:05:22.412145  356592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:05:22.413523  356592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:05:22.414649  356592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:05:22.415977  356592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:05:22.438652  356592 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:05:22.438742  356592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:22.492153  356592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 12:05:22.482164439 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:22.492249  356592 docker.go:318] overlay module found
	I1019 12:05:22.494014  356592 out.go:179] * Using the docker driver based on user configuration
	I1019 12:05:22.495123  356592 start.go:305] selected driver: docker
	I1019 12:05:22.495135  356592 start.go:925] validating driver "docker" against <nil>
	I1019 12:05:22.495146  356592 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:05:22.495751  356592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:22.550628  356592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-19 12:05:22.541359516 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:22.550791  356592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:05:22.550998  356592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:05:22.552697  356592 out.go:179] * Using Docker driver with root privileges
	I1019 12:05:22.553879  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:22.553940  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:22.553951  356592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:05:22.554010  356592 start.go:349] cluster config:
	{Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:05:22.555200  356592 out.go:179] * Starting "addons-042725" primary control-plane node in "addons-042725" cluster
	I1019 12:05:22.556225  356592 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:05:22.557328  356592 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:05:22.558392  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:22.558460  356592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:05:22.558466  356592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:05:22.558487  356592 cache.go:58] Caching tarball of preloaded images
	I1019 12:05:22.558604  356592 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:05:22.558620  356592 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:05:22.558960  356592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json ...
	I1019 12:05:22.558991  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json: {Name:mk683788e7d3d89c0ee0bc8e7707ffe5a1bcd2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:22.575359  356592 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1019 12:05:22.575522  356592 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1019 12:05:22.575543  356592 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1019 12:05:22.575548  356592 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1019 12:05:22.575555  356592 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1019 12:05:22.575561  356592 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1019 12:05:34.775726  356592 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1019 12:05:34.775774  356592 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:05:34.775813  356592 start.go:360] acquireMachinesLock for addons-042725: {Name:mk2d91f51d8b1754188cdced2792e6e9ca0fe32c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:05:34.775931  356592 start.go:364] duration metric: took 90.196µs to acquireMachinesLock for "addons-042725"
	I1019 12:05:34.775964  356592 start.go:93] Provisioning new machine with config: &{Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:05:34.776040  356592 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:05:34.777640  356592 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1019 12:05:34.777898  356592 start.go:159] libmachine.API.Create for "addons-042725" (driver="docker")
	I1019 12:05:34.777936  356592 client.go:168] LocalClient.Create starting
	I1019 12:05:34.778069  356592 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:05:35.131911  356592 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:05:35.373857  356592 cli_runner.go:164] Run: docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:05:35.391374  356592 cli_runner.go:211] docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:05:35.391467  356592 network_create.go:284] running [docker network inspect addons-042725] to gather additional debugging logs...
	I1019 12:05:35.391495  356592 cli_runner.go:164] Run: docker network inspect addons-042725
	W1019 12:05:35.408546  356592 cli_runner.go:211] docker network inspect addons-042725 returned with exit code 1
	I1019 12:05:35.408580  356592 network_create.go:287] error running [docker network inspect addons-042725]: docker network inspect addons-042725: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-042725 not found
	I1019 12:05:35.408597  356592 network_create.go:289] output of [docker network inspect addons-042725]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-042725 not found
	
	** /stderr **
	I1019 12:05:35.408732  356592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:05:35.426338  356592 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cd8db0}
	I1019 12:05:35.426378  356592 network_create.go:124] attempt to create docker network addons-042725 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1019 12:05:35.426440  356592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-042725 addons-042725
	I1019 12:05:35.481999  356592 network_create.go:108] docker network addons-042725 192.168.49.0/24 created
	I1019 12:05:35.482038  356592 kic.go:121] calculated static IP "192.168.49.2" for the "addons-042725" container
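	The network-create step above can be cross-checked against the subnet and gateway minikube requested; a minimal sketch (network name taken from the log, expected values per the lines above):
	  # Confirm subnet/gateway of the freshly created bridge network
	  docker network inspect addons-042725 \
	    --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	  # expected: subnet=192.168.49.0/24 gateway=192.168.49.1
	  # the minikube-owned labels set by "docker network create" are visible too:
	  docker network inspect addons-042725 --format '{{json .Labels}}'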
	I1019 12:05:35.482102  356592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:05:35.499467  356592 cli_runner.go:164] Run: docker volume create addons-042725 --label name.minikube.sigs.k8s.io=addons-042725 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:05:35.517137  356592 oci.go:103] Successfully created a docker volume addons-042725
	I1019 12:05:35.517209  356592 cli_runner.go:164] Run: docker run --rm --name addons-042725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --entrypoint /usr/bin/test -v addons-042725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:05:40.947214  356592 cli_runner.go:217] Completed: docker run --rm --name addons-042725-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --entrypoint /usr/bin/test -v addons-042725:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (5.429946825s)
	I1019 12:05:40.947250  356592 oci.go:107] Successfully prepared a docker volume addons-042725
	I1019 12:05:40.947278  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:40.947297  356592 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:05:40.947362  356592 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-042725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:05:45.234525  356592 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-042725:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.287098266s)
	I1019 12:05:45.234561  356592 kic.go:203] duration metric: took 4.287258224s to extract preloaded images to volume ...
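	The two docker run calls above are minikube's kic preload pattern: a throwaway container mounts the named volume and untars the preload tarball into it. A stripped-down sketch of the same pattern (image, tarball path, and volume name are placeholders, not minikube internals; the image must ship tar and lz4):
	  # Extract a .tar.lz4 into a named volume via a disposable container
	  docker volume create demo-vol
	  docker run --rm \
	    -v /path/to/preload.tar.lz4:/preloaded.tar:ro \
	    -v demo-vol:/extractDir \
	    --entrypoint /usr/bin/tar \
	    debian:bookworm -I lz4 -xf /preloaded.tar -C /extractDir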
	W1019 12:05:45.234676  356592 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:05:45.234715  356592 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:05:45.234766  356592 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:05:45.290457  356592 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-042725 --name addons-042725 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-042725 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-042725 --network addons-042725 --ip 192.168.49.2 --volume addons-042725:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:05:45.550560  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Running}}
	I1019 12:05:45.567977  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.585336  356592 cli_runner.go:164] Run: docker exec addons-042725 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:05:45.629896  356592 oci.go:144] the created container "addons-042725" has a running status.
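	The --publish=127.0.0.1:: flags in the docker run above ask Docker for ephemeral host ports, which is why later steps look the real SSH port up with an inspect template; the same lookup by hand (container name and template taken from the log):
	  # Resolve the ephemeral host port mapped to the guest's sshd (22/tcp)
	  docker container inspect addons-042725 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	  # e.g. 33138, matching the SSH client setup later in this log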
	I1019 12:05:45.629931  356592 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa...
	I1019 12:05:45.862628  356592 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:05:45.890026  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.911140  356592 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:05:45.911166  356592 kic_runner.go:114] Args: [docker exec --privileged addons-042725 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:05:45.956386  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:05:45.974321  356592 machine.go:93] provisionDockerMachine start ...
	I1019 12:05:45.974416  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:45.992969  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:45.993208  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:45.993221  356592 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:05:46.127217  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-042725
	
	I1019 12:05:46.127251  356592 ubuntu.go:182] provisioning hostname "addons-042725"
	I1019 12:05:46.127333  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:46.146050  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:46.146361  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:46.146385  356592 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-042725 && echo "addons-042725" | sudo tee /etc/hostname
	I1019 12:05:46.289886  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-042725
	
	I1019 12:05:46.289953  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:46.309356  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:46.309614  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:46.309632  356592 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-042725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-042725/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-042725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:05:46.441952  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
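	The guarded grep/sed/tee sequence above is an idempotent /etc/hosts update: rewrite the 127.0.1.1 line if one exists, append one otherwise. It can be checked from outside with minikube's ssh wrapper (sketch; profile name from the log):
	  # The guest should resolve its own hostname locally
	  minikube -p addons-042725 ssh -- grep addons-042725 /etc/hosts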
	I1019 12:05:46.441978  356592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:05:46.442014  356592 ubuntu.go:190] setting up certificates
	I1019 12:05:46.442027  356592 provision.go:84] configureAuth start
	I1019 12:05:46.442081  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:46.459541  356592 provision.go:143] copyHostCerts
	I1019 12:05:46.459612  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:05:46.459732  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:05:46.459792  356592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:05:46.459905  356592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.addons-042725 san=[127.0.0.1 192.168.49.2 addons-042725 localhost minikube]
	I1019 12:05:47.016316  356592 provision.go:177] copyRemoteCerts
	I1019 12:05:47.016386  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:05:47.016439  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.033986  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.128371  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:05:47.146531  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:05:47.163327  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:05:47.179976  356592 provision.go:87] duration metric: took 737.929126ms to configureAuth
	I1019 12:05:47.180001  356592 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:05:47.180167  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:05:47.180266  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.197964  356592 main.go:141] libmachine: Using SSH client type: native
	I1019 12:05:47.198205  356592 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1019 12:05:47.198233  356592 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:05:47.439172  356592 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:05:47.439208  356592 machine.go:96] duration metric: took 1.464857601s to provisionDockerMachine
	I1019 12:05:47.439221  356592 client.go:171] duration metric: took 12.661273606s to LocalClient.Create
	I1019 12:05:47.439248  356592 start.go:167] duration metric: took 12.661350449s to libmachine.API.Create "addons-042725"
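	The tee a few lines up wrote a one-line environment file that passes --insecure-registry for the service CIDR to CRI-O; a quick check on the node that it took effect (sketch, run via minikube ssh or docker exec):
	  cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  systemctl is-active crio   # crio was restarted with the new options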
	I1019 12:05:47.439260  356592 start.go:293] postStartSetup for "addons-042725" (driver="docker")
	I1019 12:05:47.439276  356592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:05:47.439356  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:05:47.439404  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.457237  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.554134  356592 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:05:47.557567  356592 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:05:47.557606  356592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:05:47.557620  356592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:05:47.557676  356592 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:05:47.557703  356592 start.go:296] duration metric: took 118.432853ms for postStartSetup
	I1019 12:05:47.557973  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:47.574799  356592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/config.json ...
	I1019 12:05:47.575062  356592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:05:47.575101  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.591974  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.683342  356592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:05:47.687782  356592 start.go:128] duration metric: took 12.911726122s to createHost
	I1019 12:05:47.687807  356592 start.go:83] releasing machines lock for "addons-042725", held for 12.911861976s
	I1019 12:05:47.687879  356592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-042725
	I1019 12:05:47.704631  356592 ssh_runner.go:195] Run: cat /version.json
	I1019 12:05:47.704678  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.704683  356592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:05:47.704760  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:05:47.722251  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.722589  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:05:47.865764  356592 ssh_runner.go:195] Run: systemctl --version
	I1019 12:05:47.871965  356592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:05:47.905088  356592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:05:47.909579  356592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:05:47.909650  356592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:05:47.934301  356592 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:05:47.934330  356592 start.go:495] detecting cgroup driver to use...
	I1019 12:05:47.934368  356592 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:05:47.934441  356592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:05:47.950407  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:05:47.962410  356592 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:05:47.962481  356592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:05:47.978505  356592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:05:47.995545  356592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:05:48.074725  356592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:05:48.160056  356592 docker.go:234] disabling docker service ...
	I1019 12:05:48.160122  356592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:05:48.178795  356592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:05:48.190992  356592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:05:48.271185  356592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:05:48.348568  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:05:48.360746  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:05:48.374852  356592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:05:48.374907  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.384778  356592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:05:48.384845  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.393212  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.401417  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.409762  356592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:05:48.417399  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.425693  356592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.438716  356592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:05:48.447060  356592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:05:48.454000  356592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:05:48.460782  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:05:48.535144  356592 ssh_runner.go:195] Run: sudo systemctl restart crio
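	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl; a spot check (sketch; the expected lines are reconstructed from the commands above, not dumped from the actual file):
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",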
	I1019 12:05:48.638102  356592 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:05:48.638180  356592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:05:48.641921  356592 start.go:563] Will wait 60s for crictl version
	I1019 12:05:48.641985  356592 ssh_runner.go:195] Run: which crictl
	I1019 12:05:48.645341  356592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:05:48.668927  356592 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
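	With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines earlier), the runtime answers crictl directly; the version probe above corresponds to (sketch):
	  sudo crictl version
	  # RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1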
	I1019 12:05:48.669013  356592 ssh_runner.go:195] Run: crio --version
	I1019 12:05:48.696373  356592 ssh_runner.go:195] Run: crio --version
	I1019 12:05:48.725516  356592 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:05:48.726592  356592 cli_runner.go:164] Run: docker network inspect addons-042725 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:05:48.742907  356592 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1019 12:05:48.746898  356592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:05:48.756755  356592 kubeadm.go:883] updating cluster {Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:05:48.756871  356592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:05:48.756914  356592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:05:48.787541  356592 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:05:48.787563  356592 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:05:48.787612  356592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:05:48.812563  356592 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:05:48.812587  356592 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:05:48.812597  356592 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1019 12:05:48.812714  356592 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-042725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:05:48.812796  356592 ssh_runner.go:195] Run: crio config
	I1019 12:05:48.856827  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:48.856863  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:48.856887  356592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:05:48.856920  356592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-042725 NodeName:addons-042725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:05:48.857067  356592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-042725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:05:48.857140  356592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:05:48.865234  356592 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:05:48.865287  356592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:05:48.872778  356592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1019 12:05:48.884995  356592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:05:48.899280  356592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
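	The 2209-byte kubeadm.yaml.new written above is the rendered form of the config dump earlier in this log; it can be sanity-checked on the node before it is used (sketch; `kubeadm config validate` is available in recent kubeadm releases, including the v1.34 binary path from the log):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new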
	I1019 12:05:48.910873  356592 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:05:48.914149  356592 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:05:48.923401  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:05:49.002731  356592 ssh_runner.go:195] Run: sudo systemctl start kubelet
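	The unit, drop-in, and config transferred above land at fixed paths (/lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); systemd can show both in one shot (sketch):
	  systemctl cat kubelet    # prints the unit followed by the 10-kubeadm.conf drop-in
	  systemctl is-active kubelet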
	I1019 12:05:49.027657  356592 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725 for IP: 192.168.49.2
	I1019 12:05:49.027687  356592 certs.go:195] generating shared ca certs ...
	I1019 12:05:49.027709  356592 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.027839  356592 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:05:49.090535  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt ...
	I1019 12:05:49.090562  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt: {Name:mkd44fe82d6d6779a4a67d121d283099df4db026 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.090721  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key ...
	I1019 12:05:49.090732  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key: {Name:mk380494cdd431ba8cbb4d01406505021bbb0953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.090804  356592 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:05:49.262375  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt ...
	I1019 12:05:49.262406  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt: {Name:mkdf9176b4ad4411024ab0785072334d4363e41f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.262576  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key ...
	I1019 12:05:49.262588  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key: {Name:mkd5ac799295c2b01a1de6ff9fdfeb6b58ec5937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.262655  356592 certs.go:257] generating profile certs ...
	I1019 12:05:49.262719  356592 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key
	I1019 12:05:49.262733  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt with IP's: []
	I1019 12:05:49.397758  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt ...
	I1019 12:05:49.397790  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: {Name:mk393a8dc45ccf6aae18a2f9497e245b173e789b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.397959  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key ...
	I1019 12:05:49.397970  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.key: {Name:mk1235248ab232563a3bb7c23927a3348ed9ad9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.398046  356592 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047
	I1019 12:05:49.398065  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1019 12:05:49.611799  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 ...
	I1019 12:05:49.611834  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047: {Name:mk7dc1bdfb6eda20fd91773733d1306f7614411f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.611996  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047 ...
	I1019 12:05:49.612010  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047: {Name:mk25c52b11c31df91183d40f2c11556c73cb6972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.612081  356592 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt.3a045047 -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt
	I1019 12:05:49.612197  356592 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key.3a045047 -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key
	I1019 12:05:49.612265  356592 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key
	I1019 12:05:49.612287  356592 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt with IP's: []
	I1019 12:05:49.827646  356592 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt ...
	I1019 12:05:49.827675  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt: {Name:mkcc083d5799af1a3dbeac7ea5e0a3de01075ed9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.827847  356592 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key ...
	I1019 12:05:49.827860  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key: {Name:mk7a4c4f5aa9871ccbc9fbf756b87b65d01a5e53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:05:49.828049  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:05:49.828083  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:05:49.828106  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:05:49.828128  356592 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:05:49.828778  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:05:49.846393  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:05:49.863483  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:05:49.880249  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:05:49.897109  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:05:49.913354  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:05:49.929938  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:05:49.946497  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:05:49.963145  356592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:05:49.981247  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:05:49.993240  356592 ssh_runner.go:195] Run: openssl version
	I1019 12:05:49.999020  356592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:05:50.009285  356592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.012914  356592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.012960  356592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:05:50.046562  356592 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
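	The b5213941.0 link name is the subject hash of minikubeCA, which is exactly what the openssl x509 -hash call above computes; a minimal consistency check (paths from the log):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # b5213941
	  ls -l /etc/ssl/certs/b5213941.0   # symlink -> minikubeCA.pem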
	I1019 12:05:50.055566  356592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:05:50.059043  356592 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:05:50.059108  356592 kubeadm.go:400] StartCluster: {Name:addons-042725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-042725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:05:50.059177  356592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:05:50.059220  356592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:05:50.085335  356592 cri.go:89] found id: ""
	I1019 12:05:50.085407  356592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:05:50.093475  356592 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:05:50.100953  356592 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:05:50.101008  356592 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:05:50.108371  356592 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:05:50.108387  356592 kubeadm.go:157] found existing configuration files:
	
	I1019 12:05:50.108444  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:05:50.115606  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:05:50.115676  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:05:50.122469  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:05:50.129651  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:05:50.129692  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:05:50.136673  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:05:50.143734  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:05:50.143775  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:05:50.150801  356592 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:05:50.157853  356592 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:05:50.157902  356592 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:05:50.164688  356592 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:05:50.200648  356592 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:05:50.200737  356592 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:05:50.220227  356592 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:05:50.220306  356592 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:05:50.220438  356592 kubeadm.go:318] OS: Linux
	I1019 12:05:50.220514  356592 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:05:50.220588  356592 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:05:50.220650  356592 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:05:50.220743  356592 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:05:50.220831  356592 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:05:50.220914  356592 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:05:50.221012  356592 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:05:50.221086  356592 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:05:50.275803  356592 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:05:50.275950  356592 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:05:50.276069  356592 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:05:50.283625  356592 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:05:50.286355  356592 out.go:252]   - Generating certificates and keys ...
	I1019 12:05:50.286461  356592 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:05:50.286551  356592 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:05:50.453278  356592 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:05:50.586345  356592 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:05:50.979480  356592 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:05:51.129890  356592 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:05:51.667988  356592 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:05:51.668122  356592 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-042725 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:05:51.876369  356592 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:05:51.876568  356592 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-042725 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1019 12:05:51.892924  356592 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:05:51.961391  356592 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:05:52.057190  356592 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:05:52.057323  356592 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:05:52.174242  356592 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:05:52.447560  356592 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:05:52.659323  356592 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:05:52.772052  356592 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:05:52.899333  356592 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:05:52.899950  356592 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:05:52.903480  356592 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:05:52.904742  356592 out.go:252]   - Booting up control plane ...
	I1019 12:05:52.904866  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:05:52.904972  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:05:52.905721  356592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:05:52.918783  356592 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:05:52.918905  356592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:05:52.925204  356592 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:05:52.925548  356592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:05:52.925594  356592 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:05:53.021904  356592 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:05:53.022050  356592 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:05:54.022574  356592 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000941629s
	I1019 12:05:54.025408  356592 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:05:54.025574  356592 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1019 12:05:54.025685  356592 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:05:54.025777  356592 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:05:55.241610  356592 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.215952193s
	I1019 12:05:55.886039  356592 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.860501155s
	I1019 12:05:57.527596  356592 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502020548s
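The three control-plane-check probes above are plain HTTPS GETs against fixed health endpoints: the apiserver's /livez on the advertise address, and the controller-manager and scheduler health ports on localhost. A minimal Go sketch of the same probe (illustration only, not minikube code; the URLs are taken from the log, and TLS verification is skipped because these components serve self-signed certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoints taken from the control-plane-check lines in the log.
	endpoints := []string{
		"https://192.168.49.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Self-signed serving certs, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%s: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", url, resp.Status) // healthy components answer 200
	}
}
```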
	I1019 12:05:57.538189  356592 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:05:57.547552  356592 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:05:57.555745  356592 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:05:57.556048  356592 kubeadm.go:318] [mark-control-plane] Marking the node addons-042725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:05:57.564334  356592 kubeadm.go:318] [bootstrap-token] Using token: h8tkp4.5gchpu2ualu0x2ks
	I1019 12:05:57.565665  356592 out.go:252]   - Configuring RBAC rules ...
	I1019 12:05:57.565804  356592 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:05:57.568622  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:05:57.573338  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:05:57.575593  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:05:57.578795  356592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:05:57.581089  356592 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:05:57.934580  356592 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:05:58.347950  356592 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:05:58.932885  356592 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:05:58.933659  356592 kubeadm.go:318] 
	I1019 12:05:58.933730  356592 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:05:58.933754  356592 kubeadm.go:318] 
	I1019 12:05:58.933850  356592 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:05:58.933868  356592 kubeadm.go:318] 
	I1019 12:05:58.933909  356592 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:05:58.933991  356592 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:05:58.934069  356592 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:05:58.934079  356592 kubeadm.go:318] 
	I1019 12:05:58.934158  356592 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:05:58.934170  356592 kubeadm.go:318] 
	I1019 12:05:58.934208  356592 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:05:58.934214  356592 kubeadm.go:318] 
	I1019 12:05:58.934259  356592 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:05:58.934326  356592 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:05:58.934382  356592 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:05:58.934388  356592 kubeadm.go:318] 
	I1019 12:05:58.934528  356592 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:05:58.934619  356592 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:05:58.934630  356592 kubeadm.go:318] 
	I1019 12:05:58.934701  356592 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h8tkp4.5gchpu2ualu0x2ks \
	I1019 12:05:58.934793  356592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:05:58.934815  356592 kubeadm.go:318] 	--control-plane 
	I1019 12:05:58.934822  356592 kubeadm.go:318] 
	I1019 12:05:58.934910  356592 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:05:58.934918  356592 kubeadm.go:318] 
	I1019 12:05:58.934983  356592 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h8tkp4.5gchpu2ualu0x2ks \
	I1019 12:05:58.935068  356592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:05:58.937647  356592 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:05:58.937754  356592 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
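The --discovery-token-ca-cert-hash value in the join commands above is not arbitrary: kubeadm derives it as a SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A minimal Go sketch of the derivation (illustration only, not kubeadm itself; assumes read access to /etc/kubernetes/pki/ca.crt on the node):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:]) // should match the hash in the join command
}
```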
	I1019 12:05:58.937769  356592 cni.go:84] Creating CNI manager for ""
	I1019 12:05:58.937777  356592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:05:58.940205  356592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:05:58.941287  356592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:05:58.945456  356592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:05:58.945472  356592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:05:58.958457  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 12:05:59.155212  356592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:05:59.155362  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:05:59.155405  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-042725 minikube.k8s.io/updated_at=2025_10_19T12_05_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=addons-042725 minikube.k8s.io/primary=true
	I1019 12:05:59.231534  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:05:59.240594  356592 ops.go:34] apiserver oom_adj: -16
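ops.go reports the apiserver's oom_adj as -16, read via the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above; a strongly negative value biases the kernel OOM killer away from the process. A minimal Go equivalent of that shell pipeline (illustration only; takes the first pgrep match):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "pgrep:", err)
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running")
		return
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		return
	}
	// -16, as logged above, makes the OOM killer avoid the apiserver.
	fmt.Printf("kube-apiserver oom_adj: %s", data)
}
```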
	I1019 12:05:59.731737  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:00.232109  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:00.732668  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:01.231738  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:01.731905  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:02.231991  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:02.732557  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:03.231887  356592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:06:03.294305  356592 kubeadm.go:1113] duration metric: took 4.139025269s to wait for elevateKubeSystemPrivileges
	I1019 12:06:03.294350  356592 kubeadm.go:402] duration metric: took 13.235249068s to StartCluster
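The burst of `kubectl get sa default` runs above is a readiness poll: roughly every 500ms minikube re-checks that the default ServiceAccount exists before creating the privileged kube-system binding, and here the wait took about 4.1s. A minimal sketch of such a poll loop (assumed shape, not minikube's implementation; `kubectl` on PATH and the node kubeconfig path are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds,
// mirroring the ~500ms polling cadence visible in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // ServiceAccount exists; privileged bindings can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/kubeconfig", 4*time.Minute))
}
```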
	I1019 12:06:03.294391  356592 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:03.294536  356592 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:06:03.294975  356592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:06:03.295171  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:06:03.295177  356592 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:06:03.295254  356592 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 12:06:03.295385  356592 addons.go:69] Setting yakd=true in profile "addons-042725"
	I1019 12:06:03.295405  356592 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-042725"
	I1019 12:06:03.295415  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:06:03.295438  356592 addons.go:69] Setting registry=true in profile "addons-042725"
	I1019 12:06:03.295414  356592 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-042725"
	I1019 12:06:03.295454  356592 addons.go:238] Setting addon registry=true in "addons-042725"
	I1019 12:06:03.295429  356592 addons.go:238] Setting addon yakd=true in "addons-042725"
	I1019 12:06:03.295471  356592 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-042725"
	I1019 12:06:03.295441  356592 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-042725"
	I1019 12:06:03.295499  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295504  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295510  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295523  356592 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-042725"
	I1019 12:06:03.295555  356592 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-042725"
	I1019 12:06:03.295576  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295603  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.295951  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296031  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296043  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296068  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296117  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296318  356592 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-042725"
	I1019 12:06:03.296342  356592 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-042725"
	I1019 12:06:03.296558  356592 addons.go:69] Setting registry-creds=true in profile "addons-042725"
	I1019 12:06:03.296616  356592 addons.go:238] Setting addon registry-creds=true in "addons-042725"
	I1019 12:06:03.296669  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.296917  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.296996  356592 addons.go:69] Setting cloud-spanner=true in profile "addons-042725"
	I1019 12:06:03.297018  356592 out.go:179] * Verifying Kubernetes components...
	I1019 12:06:03.297105  356592 addons.go:69] Setting volumesnapshots=true in profile "addons-042725"
	I1019 12:06:03.297126  356592 addons.go:238] Setting addon volumesnapshots=true in "addons-042725"
	I1019 12:06:03.297153  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.297197  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297652  356592 addons.go:69] Setting ingress-dns=true in profile "addons-042725"
	I1019 12:06:03.297679  356592 addons.go:238] Setting addon ingress-dns=true in "addons-042725"
	I1019 12:06:03.297684  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297715  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.297916  356592 addons.go:69] Setting inspektor-gadget=true in profile "addons-042725"
	I1019 12:06:03.297942  356592 addons.go:238] Setting addon inspektor-gadget=true in "addons-042725"
	I1019 12:06:03.297980  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298139  356592 addons.go:69] Setting gcp-auth=true in profile "addons-042725"
	I1019 12:06:03.298165  356592 mustload.go:65] Loading cluster: addons-042725
	I1019 12:06:03.298227  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.298265  356592 addons.go:69] Setting storage-provisioner=true in profile "addons-042725"
	I1019 12:06:03.298295  356592 addons.go:238] Setting addon storage-provisioner=true in "addons-042725"
	I1019 12:06:03.298317  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298411  356592 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:06:03.298499  356592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:06:03.298511  356592 addons.go:69] Setting metrics-server=true in profile "addons-042725"
	I1019 12:06:03.298529  356592 addons.go:238] Setting addon metrics-server=true in "addons-042725"
	I1019 12:06:03.298550  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.298703  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.298503  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.301961  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.302850  356592 addons.go:69] Setting ingress=true in profile "addons-042725"
	I1019 12:06:03.302874  356592 addons.go:238] Setting addon ingress=true in "addons-042725"
	I1019 12:06:03.302920  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.303325  356592 addons.go:69] Setting volcano=true in profile "addons-042725"
	I1019 12:06:03.303344  356592 addons.go:238] Setting addon volcano=true in "addons-042725"
	I1019 12:06:03.303353  356592 addons.go:69] Setting default-storageclass=true in profile "addons-042725"
	I1019 12:06:03.303372  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.303378  356592 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-042725"
	I1019 12:06:03.303407  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.303809  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.304209  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.297023  356592 addons.go:238] Setting addon cloud-spanner=true in "addons-042725"
	I1019 12:06:03.304716  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.311995  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.312476  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.351499  356592 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 12:06:03.351699  356592 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 12:06:03.351497  356592 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 12:06:03.353943  356592 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:06:03.354016  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 12:06:03.354100  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 12:06:03.354110  356592 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 12:06:03.354166  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.354524  356592 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:06:03.354540  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 12:06:03.354585  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.354943  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
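The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` runs above resolve which host port Docker mapped to the container's SSH port 22; the sshutil lines later in the log connect to the result (port 33138). A minimal Go wrapper around the same lookup (illustration only; assumes the docker CLI on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks Docker which host port is mapped to the container's 22/tcp,
// using the same Go template as the cli_runner lines above.
func sshPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("addons-042725")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 33138 in this run, per the sshutil lines
}
```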
	I1019 12:06:03.365602  356592 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 12:06:03.366867  356592 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:06:03.366893  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 12:06:03.366958  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.369815  356592 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 12:06:03.369815  356592 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 12:06:03.371275  356592 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1019 12:06:03.371298  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 12:06:03.371360  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.373090  356592 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 12:06:03.374297  356592 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 12:06:03.374322  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 12:06:03.374409  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.395179  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 12:06:03.395242  356592 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 12:06:03.399843  356592 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-042725"
	I1019 12:06:03.399905  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.400560  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.400980  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 12:06:03.402206  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 12:06:03.404997  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 12:06:03.405348  356592 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:06:03.405370  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 12:06:03.405457  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.407298  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 12:06:03.408408  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	W1019 12:06:03.409415  356592 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 12:06:03.409881  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 12:06:03.409952  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 12:06:03.410012  356592 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 12:06:03.411132  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 12:06:03.410068  356592 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 12:06:03.414577  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:03.414620  356592 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 12:06:03.414633  356592 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 12:06:03.414708  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.411794  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 12:06:03.415585  356592 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 12:06:03.415661  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.411191  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 12:06:03.416757  356592 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 12:06:03.418844  356592 addons.go:238] Setting addon default-storageclass=true in "addons-042725"
	I1019 12:06:03.418892  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.419370  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:03.419566  356592 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 12:06:03.419630  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.421986  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.423339  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 12:06:03.423358  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 12:06:03.423438  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.423786  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:03.424161  356592 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:06:03.425887  356592 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:06:03.425907  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:06:03.425962  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.426035  356592 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:06:03.426052  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 12:06:03.426116  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.433450  356592 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
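The bash pipeline above edits the CoreDNS ConfigMap in place: sed splices a hosts stanza in front of the `forward . /etc/resolv.conf` line and a `log` directive after `errors`, then feeds the result back through `kubectl replace`. Reconstructed from the sed expression, the injected stanza is:

```
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
```

This is what makes host.minikube.internal resolvable from pods, as the later "host record injected" line confirms.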
	I1019 12:06:03.438999  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:03.447524  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.453476  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.455659  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.460368  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.467690  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.491037  356592 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 12:06:03.493024  356592 out.go:179]   - Using image docker.io/busybox:stable
	I1019 12:06:03.494171  356592 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:06:03.494192  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 12:06:03.494307  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.494549  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.500626  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.500729  356592 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:06:03.501576  356592 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:06:03.500785  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.501651  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:03.505077  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.506647  356592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:06:03.512568  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.513224  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.517274  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	W1019 12:06:03.519638  356592 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1019 12:06:03.519951  356592 retry.go:31] will retry after 316.586718ms: ssh: handshake failed: EOF
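The handshake-failed line above is handled by retry.go with a randomized delay (316.586718ms here) rather than a fixed one. A minimal sketch of that pattern (assumed shape, not minikube's retry.go; the attempt count and base delay are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a randomized interval between
// tries so concurrent dialers don't hammer the SSH server in lockstep.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	err := retry(3, 200*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed: EOF") // stand-in for the real dial
	})
	fmt.Println("final:", err)
}
```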
	I1019 12:06:03.533327  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.545454  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:03.629790  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 12:06:03.632833  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 12:06:03.647678  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 12:06:03.648672  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 12:06:03.676014  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 12:06:03.688925  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 12:06:03.689031  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 12:06:03.702225  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 12:06:03.702256  356592 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 12:06:03.704207  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 12:06:03.704283  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 12:06:03.710035  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 12:06:03.710247  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 12:06:03.710272  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 12:06:03.715144  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 12:06:03.716662  356592 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:03.716682  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 12:06:03.729680  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:06:03.734708  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:06:03.743469  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 12:06:03.743497  356592 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 12:06:03.753398  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 12:06:03.753444  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 12:06:03.761788  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 12:06:03.761830  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 12:06:03.774651  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:03.789013  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 12:06:03.789046  356592 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 12:06:03.799785  356592 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:06:03.799811  356592 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 12:06:03.812733  356592 node_ready.go:35] waiting up to 6m0s for node "addons-042725" to be "Ready" ...
	I1019 12:06:03.813046  356592 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1019 12:06:03.814383  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 12:06:03.814470  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 12:06:03.823872  356592 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 12:06:03.823928  356592 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 12:06:03.835956  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 12:06:03.851177  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 12:06:03.851221  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 12:06:03.868769  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 12:06:03.868795  356592 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 12:06:03.871658  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 12:06:03.871743  356592 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 12:06:03.896188  356592 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 12:06:03.896216  356592 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 12:06:03.949704  356592 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:06:03.949803  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 12:06:03.957240  356592 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:06:03.957263  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 12:06:03.967396  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 12:06:03.967472  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 12:06:04.007979  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 12:06:04.008135  356592 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 12:06:04.009293  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 12:06:04.017213  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 12:06:04.064727  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 12:06:04.064759  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 12:06:04.117589  356592 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 12:06:04.117617  356592 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 12:06:04.122688  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 12:06:04.122724  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 12:06:04.148921  356592 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:06:04.148947  356592 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 12:06:04.161054  356592 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:06:04.161154  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 12:06:04.212030  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 12:06:04.228336  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 12:06:04.320487  356592 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-042725" context rescaled to 1 replicas
	I1019 12:06:04.901320  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.186134766s)
	I1019 12:06:04.901689  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.171973805s)
	I1019 12:06:04.901746  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.166888747s)
	I1019 12:06:04.901985  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.127299676s)
	W1019 12:06:04.902023  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:04.902049  356592 retry.go:31] will retry after 231.049371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
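The apply fails validation because ig-crd.yaml carries neither apiVersion nor kind; consistent with that, the file was scp'd earlier in this log at only 14 bytes, so the CRD manifest is effectively empty and every retry of the same file fails identically. A crude pre-flight check for the two required fields (illustration only, not how kubectl validates; a real check would decode the YAML):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// hasRequiredFields is a crude stand-in for the validation kubectl performs:
// every manifest must declare apiVersion and kind.
func hasRequiredFields(manifest []byte) bool {
	s := string(manifest)
	return strings.Contains(s, "apiVersion:") && strings.Contains(s, "kind:")
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// A 14-byte file, as transferred earlier in this log, cannot carry either field.
	fmt.Printf("%d bytes, required fields present: %v\n", len(data), hasRequiredFields(data))
}
```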
	I1019 12:06:04.902130  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.066085786s)
	I1019 12:06:04.902154  356592 addons.go:479] Verifying addon metrics-server=true in "addons-042725"
	I1019 12:06:04.903510  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.193366167s)
	I1019 12:06:04.903546  356592 addons.go:479] Verifying addon ingress=true in "addons-042725"
	I1019 12:06:04.904793  356592 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-042725 service yakd-dashboard -n yakd-dashboard
	
	I1019 12:06:04.904861  356592 out.go:179] * Verifying ingress addon...
	I1019 12:06:04.906779  356592 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 12:06:04.911510  356592 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	W1019 12:06:04.921387  356592 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class standard as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "standard": the object has been modified; please apply your changes to the latest version and try again]
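The default-storageclass failure above is an optimistic-concurrency conflict: the StorageClass was modified between minikube's read and its update, so the apiserver rejects the stale write. The standard remedy is to re-read and reapply on conflict; a minimal client-go sketch (assumes the client-go dependency and the node kubeconfig path; not minikube's code):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println(err)
		return
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	// On a 409 conflict, re-fetch the object and reapply the annotation.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("mark default storage class:", err)
}
```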
	I1019 12:06:05.134098  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:05.360009  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.342569009s)
	W1019 12:06:05.360069  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 12:06:05.360085  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.148025281s)
	I1019 12:06:05.360095  356592 retry.go:31] will retry after 167.873129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
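Both snapshot-related failures above are an ordering problem: the VolumeSnapshotClass object cannot be mapped until the volumesnapshotclasses CRD from the same batch is registered and established, which is why the later `apply --force` retry succeeds once the CRDs from the first pass exist. One way to make the ordering explicit (illustration only; assumes kubectl on PATH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the CRD from the first apply pass is established; only then
	// can resources of kind VolumeSnapshotClass be mapped and created.
	wait := exec.Command("kubectl", "wait",
		"--for=condition=Established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
		"--timeout=60s")
	if out, err := wait.CombinedOutput(); err != nil {
		fmt.Printf("%s: %v\n", out, err)
		return
	}
	fmt.Println("CRD established; snapshot classes can now be applied")
}
```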
	I1019 12:06:05.360105  356592 addons.go:479] Verifying addon registry=true in "addons-042725"
	I1019 12:06:05.360371  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.131927447s)
	I1019 12:06:05.360416  356592 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-042725"
	I1019 12:06:05.362046  356592 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 12:06:05.362064  356592 out.go:179] * Verifying registry addon...
	I1019 12:06:05.364225  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 12:06:05.364225  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 12:06:05.368117  356592 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:06:05.368144  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:05.369175  356592 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:06:05.369198  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:05.468219  356592 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 12:06:05.468240  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
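The kapi.go lines above poll pods by label selector until they leave Pending. Roughly the same wait can be expressed with `kubectl wait` (illustration only; note the ingress-nginx admission Job pods complete rather than become Ready, so a label-wide Ready wait is only an approximation of the per-pod polling shown here):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "wait", "pod",
		"-l", "app.kubernetes.io/name=ingress-nginx",
		"-n", "ingress-nginx",
		"--for=condition=Ready",
		"--timeout=5m")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("wait failed:", err) // expected for the completed admission Job pods
	}
}
```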
	I1019 12:06:05.528308  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1019 12:06:05.762026  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:05.762065  356592 retry.go:31] will retry after 245.711865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 12:06:05.815237  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:05.867242  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:05.867335  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:05.909786  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:06.008906  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:06.368395  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:06.368406  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:06.469455  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:06.867355  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:06.867507  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:06.910144  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:07.368102  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:07.368116  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:07.409964  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:07.815684  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:07.867834  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:07.867932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:07.909915  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:08.021191  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492828476s)
	I1019 12:06:08.021261  356592 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.012322596s)
	W1019 12:06:08.021293  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:08.021313  356592 retry.go:31] will retry after 515.70648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:08.367866  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:08.367899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:08.409632  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:08.538068  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:08.867684  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:08.867722  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:08.909990  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:09.077167  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:09.077199  356592 retry.go:31] will retry after 944.52464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:09.367345  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:09.367501  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:09.410506  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:09.816346  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:09.867858  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:09.867947  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:09.910532  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:10.022556  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:10.367935  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:10.368089  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:10.409558  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:10.562777  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:10.562809  356592 retry.go:31] will retry after 1.228877817s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
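
Note the retry delays so far: 245.7ms, 515.7ms, 944.5ms, 1.229s — roughly doubling each attempt, with jitter, and they keep growing (to 14.3s further down) while the error stays byte-identical. A minimal sketch of that exponential-backoff-with-jitter shape follows; the constants and jitter factor are assumptions, not retry.go's actual values.

// Exponential backoff with jitter, matching the growth of the
// "will retry after ..." delays in this log. Illustrative constants.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Randomize around the current delay, then double it for next time.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retry(5, 300*time.Millisecond, func() error {
		return errors.New("Process exited with status 1")
	})
}
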
	I1019 12:06:10.867396  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:10.867500  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:10.910072  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:11.048870  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 12:06:11.048959  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:11.067302  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:11.175981  356592 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 12:06:11.188344  356592 addons.go:238] Setting addon gcp-auth=true in "addons-042725"
	I1019 12:06:11.188440  356592 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:06:11.188985  356592 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:06:11.206940  356592 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 12:06:11.207010  356592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:06:11.225414  356592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:06:11.319803  356592 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 12:06:11.321061  356592 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 12:06:11.322254  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 12:06:11.322271  356592 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 12:06:11.335526  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 12:06:11.335550  356592 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 12:06:11.348320  356592 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:06:11.348346  356592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 12:06:11.361061  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 12:06:11.368404  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:11.368605  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:11.410601  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:11.665099  356592 addons.go:479] Verifying addon gcp-auth=true in "addons-042725"
	I1019 12:06:11.666540  356592 out.go:179] * Verifying gcp-auth addon...
	I1019 12:06:11.669163  356592 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 12:06:11.671534  356592 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 12:06:11.671552  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
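
In parallel with the retries, the gcp-auth addon is wired up: the "scp memory --> <path> (N bytes)" lines copy in-memory byte slices (the generated credentials and webhook manifest) to the node over the SSH port that the docker inspect template extracted (127.0.0.1:33138), after which kubectl applies the three gcp-auth manifests. A hedged sketch of that memory-to-file copy over SSH; the sudo-tee transport and the copyMemory name are assumptions for illustration, not ssh_runner.go's actual mechanism.

// Sketch of "scp memory --> <dest>": stream an in-memory payload to the
// node over SSH instead of copying a local file. Illustrative transport.
package sketch

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

func copyMemory(addr, user string, privateKey, payload []byte, dest string) error {
	signer, err := ssh.ParsePrivateKey(privateKey)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(payload)
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
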
	I1019 12:06:11.792610  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:11.867751  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:11.867751  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:11.910628  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:12.172493  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:12.315774  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	W1019 12:06:12.328066  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:12.328093  356592 retry.go:31] will retry after 2.459662068s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:12.367856  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:12.367997  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:12.409818  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:12.672956  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:12.867032  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:12.867075  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:12.909940  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:13.172797  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:13.367544  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:13.367669  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:13.410849  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:13.672499  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:13.867765  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:13.867801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:13.910644  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:14.172338  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:14.316062  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:14.367794  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:14.367822  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:14.410467  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:14.672100  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:14.788327  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:14.868204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:14.868215  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:14.909965  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:15.173455  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:15.323282  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:15.323310  356592 retry.go:31] will retry after 2.538443314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:15.367091  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:15.367237  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:15.409811  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:15.672236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:15.867591  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:15.867701  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:15.910443  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:16.172151  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:16.367807  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:16.367861  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:16.410485  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:16.672832  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:16.815378  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:16.867752  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:16.867783  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:16.910467  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:17.172243  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:17.367923  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:17.367984  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:17.409965  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:17.672741  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:17.862292  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:17.867285  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:17.867347  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:17.910190  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:18.172710  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:18.367011  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:18.367030  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:06:18.403487  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:18.403517  356592 retry.go:31] will retry after 3.500276456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:18.410311  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:18.672524  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:18.816261  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:18.867784  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:18.867898  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:18.910271  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:19.171886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:19.367037  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:19.367033  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:19.409905  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:19.672675  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:19.867071  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:19.867082  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:19.909875  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:20.172768  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:20.366943  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:20.367053  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:20.409866  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:20.672990  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:20.867309  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:20.867337  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:20.909843  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:21.172575  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:21.316331  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:21.368113  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:21.368203  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:21.409736  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:21.672573  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:21.867714  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:21.867752  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:21.904925  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:21.909895  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:22.172899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:22.367128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:22.367174  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:22.409961  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:22.439035  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:22.439071  356592 retry.go:31] will retry after 8.473188125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:22.671745  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:22.867887  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:22.867974  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:22.909557  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:23.172110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:23.367321  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:23.367357  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:23.410383  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:23.672278  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:23.816101  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:23.867782  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:23.867965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:23.910888  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:24.172802  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:24.367014  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:24.367152  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:24.409887  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:24.673054  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:24.867243  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:24.867367  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:24.910131  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:25.172802  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:25.367077  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:25.367184  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:25.409829  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:25.672677  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:25.816308  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:25.867805  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:25.867866  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:25.909380  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:26.172229  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:26.368047  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:26.368047  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:26.409842  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:26.673069  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:26.867067  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:26.867236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:26.909971  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:27.172661  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:27.367350  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:27.367506  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:27.410565  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:27.672249  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:27.867500  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:27.867552  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:27.910216  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:28.172950  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:28.315654  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:28.367126  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:28.367226  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:28.410015  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:28.673309  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:28.867943  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:28.867947  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:28.910512  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:29.172196  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:29.367729  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:29.367773  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:29.410545  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:29.672334  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:29.867767  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:29.867800  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:29.910399  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.172148  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:30.315862  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:30.367589  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:30.367659  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:30.410293  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.671956  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:30.867063  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:30.867236  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:30.909749  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:30.912795  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:31.171995  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:31.367772  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:31.367881  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:31.409532  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:31.446556  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:31.446593  356592 retry.go:31] will retry after 14.325800896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1019 12:06:31.672327  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:31.867431  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:31.867544  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:31.909983  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:32.172764  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:32.366965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:32.366976  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:32.410094  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:32.672873  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:32.815331  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:32.867998  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:32.868073  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:32.909881  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:33.172721  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:33.367880  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:33.367967  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:33.410128  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:33.671851  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:33.867808  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:33.867930  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:33.909717  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:34.172173  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:34.367488  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:34.367540  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:34.410232  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:34.671909  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:34.867983  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:34.868067  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:34.909733  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:35.172549  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:35.316198  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:35.367600  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:35.367595  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:35.410312  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:35.672070  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:35.867128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:35.867175  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:35.909671  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:36.172469  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:36.367933  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:36.368041  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:36.409738  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:36.672388  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:36.867560  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:36.867696  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:36.910309  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:37.171985  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:37.367740  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:37.367875  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:37.409481  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:37.672377  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:37.816077  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:37.867894  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:37.867968  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:37.910653  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:38.172497  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:38.367072  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:38.367097  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:38.409762  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:38.672582  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:38.867975  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:38.868097  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:38.910089  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:39.173116  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:39.367688  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:39.367864  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:39.410613  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:39.672372  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:39.816144  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:39.867549  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:39.867640  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:39.910588  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:40.172481  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:40.367579  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:40.367689  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:40.410279  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:40.671901  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:40.866826  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:40.866941  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:40.910553  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:41.172117  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:41.367396  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:41.367537  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:41.410111  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:41.672708  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:41.867973  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:41.868069  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:41.909829  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:42.172553  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 12:06:42.316052  356592 node_ready.go:57] node "addons-042725" has "Ready":"False" status (will retry)
	I1019 12:06:42.368110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:42.368202  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:42.410079  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:42.672955  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:42.867107  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:42.867146  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:42.909647  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:43.172755  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:43.367118  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:43.367123  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:43.409980  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:43.672620  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:43.867670  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:43.867786  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:43.910548  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:44.172084  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:44.367491  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:44.367574  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:44.410305  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:44.672145  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:44.815240  356592 node_ready.go:49] node "addons-042725" is "Ready"
	I1019 12:06:44.815277  356592 node_ready.go:38] duration metric: took 41.002510053s for node "addons-042725" to be "Ready" ...
	I1019 12:06:44.815295  356592 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:06:44.815349  356592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:06:44.832372  356592 api_server.go:72] duration metric: took 41.537165788s to wait for apiserver process to appear ...
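The process probe above (sudo pgrep -xnf kube-apiserver.*minikube.*) leans on three pgrep flags worth spelling out; a short reference using standard pgrep(1) semantics, nothing minikube-specific:

	# sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	#   -f  match the pattern against the full command line, not just the process name
	#   -x  require the regex to match that command line exactly (anchored)
	#   -n  report only the newest matching process
	# Prints the matching PID on success; exits non-zero when nothing matches.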
	I1019 12:06:44.832404  356592 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:06:44.832447  356592 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1019 12:06:44.837172  356592 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1019 12:06:44.838075  356592 api_server.go:141] control plane version: v1.34.1
	I1019 12:06:44.838100  356592 api_server.go:131] duration metric: took 5.688895ms to wait for apiserver health ...
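The healthz probe logged above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the apiserver is still reachable at 192.168.49.2:8443 from the host (flags and caveats noted in the comments):

	# Manual version of the probe above; a sketch, not part of the test run.
	# -s silences progress output, -k skips TLS verification because the
	# cluster CA bundle is not assumed here. Depending on the cluster's
	# anonymous-auth settings, /healthz may instead demand client credentials.
	curl -sk https://192.168.49.2:8443/healthz
	# A healthy apiserver answers with the literal body: ok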
	I1019 12:06:44.838108  356592 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:06:44.842666  356592 system_pods.go:59] 20 kube-system pods found
	I1019 12:06:44.842699  356592 system_pods.go:61] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending
	I1019 12:06:44.842713  356592 system_pods.go:61] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:44.842719  356592 system_pods.go:61] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending
	I1019 12:06:44.842734  356592 system_pods.go:61] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending
	I1019 12:06:44.842743  356592 system_pods.go:61] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending
	I1019 12:06:44.842748  356592 system_pods.go:61] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:44.842752  356592 system_pods.go:61] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:44.842758  356592 system_pods.go:61] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:44.842766  356592 system_pods.go:61] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:44.842772  356592 system_pods.go:61] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending
	I1019 12:06:44.842776  356592 system_pods.go:61] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:44.842781  356592 system_pods.go:61] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:44.842792  356592 system_pods.go:61] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:44.842801  356592 system_pods.go:61] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending
	I1019 12:06:44.842807  356592 system_pods.go:61] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending
	I1019 12:06:44.842818  356592 system_pods.go:61] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:44.842823  356592 system_pods.go:61] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending
	I1019 12:06:44.842829  356592 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending
	I1019 12:06:44.842834  356592 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending
	I1019 12:06:44.842843  356592 system_pods.go:61] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:44.842853  356592 system_pods.go:74] duration metric: took 4.738957ms to wait for pod list to return data ...
	I1019 12:06:44.842867  356592 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:06:44.844935  356592 default_sa.go:45] found service account: "default"
	I1019 12:06:44.844958  356592 default_sa.go:55] duration metric: took 2.084243ms for default service account to be created ...
	I1019 12:06:44.844969  356592 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:06:44.852085  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:44.852116  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending
	I1019 12:06:44.852128  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:44.852135  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending
	I1019 12:06:44.852142  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending
	I1019 12:06:44.852147  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending
	I1019 12:06:44.852151  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:44.852157  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:44.852171  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:44.852177  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:44.852190  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:44.852197  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:44.852204  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:44.852215  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:44.852224  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending
	I1019 12:06:44.852230  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending
	I1019 12:06:44.852239  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:44.852247  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending
	I1019 12:06:44.852252  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending
	I1019 12:06:44.852260  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending
	I1019 12:06:44.852270  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:44.852293  356592 retry.go:31] will retry after 221.120739ms: missing components: kube-dns
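The "missing components: kube-dns" retry above means CoreDNS has not reported Running yet; the same condition can be inspected directly. A minimal sketch, where the k8s-app=kube-dns selector is the conventional CoreDNS label assumed here rather than read from this log:

	# Inspect the component the waiter above is blocked on.
	kubectl -n kube-system get pods -l k8s-app=kube-dns
	# Or block until it is Ready, mirroring what the test harness does:
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l k8s-app=kube-dns --timeout=60s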
	I1019 12:06:44.866835  356592 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 12:06:44.866865  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:44.866845  356592 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 12:06:44.866884  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:44.910409  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:45.081281  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:45.081315  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:06:45.081322  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:06:45.081331  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:06:45.081337  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:06:45.081342  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:06:45.081346  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:45.081350  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:45.081354  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:45.081357  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:45.081367  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:45.081372  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:45.081378  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:45.081385  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:45.081392  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:06:45.081399  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:06:45.081407  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:45.081441  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:06:45.081458  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.081468  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.081482  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:06:45.081500  356592 retry.go:31] will retry after 243.207498ms: missing components: kube-dns
	I1019 12:06:45.181111  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:45.329158  356592 system_pods.go:86] 20 kube-system pods found
	I1019 12:06:45.329193  356592 system_pods.go:89] "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 12:06:45.329199  356592 system_pods.go:89] "coredns-66bc5c9577-8bhw9" [7cd896cd-6595-4cb3-aed2-5e832e989dca] Running
	I1019 12:06:45.329207  356592 system_pods.go:89] "csi-hostpath-attacher-0" [c1a8fe28-1ecc-4175-8398-3e489d9a4d58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 12:06:45.329212  356592 system_pods.go:89] "csi-hostpath-resizer-0" [5df401ce-c486-41d1-bfdf-92872a6c9035] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 12:06:45.329218  356592 system_pods.go:89] "csi-hostpathplugin-vjzh8" [affa99c6-463a-4e94-8d81-bdd935550bef] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 12:06:45.329222  356592 system_pods.go:89] "etcd-addons-042725" [b2d23439-2167-44bb-ab3a-d14f888fae78] Running
	I1019 12:06:45.329225  356592 system_pods.go:89] "kindnet-jkhpq" [f72f0a67-6931-4d85-862a-38eeef79cdb3] Running
	I1019 12:06:45.329229  356592 system_pods.go:89] "kube-apiserver-addons-042725" [75f261b7-b4eb-464d-b7d8-5828ef37823e] Running
	I1019 12:06:45.329232  356592 system_pods.go:89] "kube-controller-manager-addons-042725" [d7e3d45c-e0c3-4477-9453-95590f9b40da] Running
	I1019 12:06:45.329238  356592 system_pods.go:89] "kube-ingress-dns-minikube" [db10e618-086c-4c0a-960c-df9ac584bc08] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 12:06:45.329241  356592 system_pods.go:89] "kube-proxy-8swjm" [84b70270-605f-488d-b1ce-6749279e0c6f] Running
	I1019 12:06:45.329245  356592 system_pods.go:89] "kube-scheduler-addons-042725" [280ae2d8-007c-48bd-81fa-54c164113968] Running
	I1019 12:06:45.329249  356592 system_pods.go:89] "metrics-server-85b7d694d7-m56bv" [954b468e-a39d-4596-b8a9-62f10f5aa910] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 12:06:45.329258  356592 system_pods.go:89] "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 12:06:45.329263  356592 system_pods.go:89] "registry-6b586f9694-98h42" [95130b7b-05dc-4919-a9ab-5159f9e85c82] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 12:06:45.329273  356592 system_pods.go:89] "registry-creds-764b6fb674-rg7vx" [0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 12:06:45.329278  356592 system_pods.go:89] "registry-proxy-wlzbz" [172ed291-7498-4487-9cd8-04ca84123237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 12:06:45.329285  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bzfmt" [aa5555c5-201a-4550-a1bf-71aae1cf0d22] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.329291  356592 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qthpd" [400d7aa3-8252-438f-90de-e39187b5de7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 12:06:45.329295  356592 system_pods.go:89] "storage-provisioner" [58acddcb-0271-4830-8593-4be76d171679] Running
	I1019 12:06:45.329302  356592 system_pods.go:126] duration metric: took 484.327821ms to wait for k8s-apps to be running ...
	I1019 12:06:45.329312  356592 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:06:45.329353  356592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:06:45.342281  356592 system_svc.go:56] duration metric: took 12.957622ms WaitForService to wait for kubelet
	I1019 12:06:45.342310  356592 kubeadm.go:586] duration metric: took 42.047112038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:06:45.342330  356592 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:06:45.345028  356592 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:06:45.345059  356592 node_conditions.go:123] node cpu capacity is 8
	I1019 12:06:45.345078  356592 node_conditions.go:105] duration metric: took 2.74248ms to run NodePressure ...
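The NodePressure verification above reads the node's capacity fields (304681132Ki of ephemeral storage and 8 CPUs on this runner). The same figures can be pulled from the API; a minimal sketch:

	# Print the node capacity section the NodePressure check consumed
	# (through the Allocatable: header that follows it).
	kubectl describe node addons-042725 | sed -n '/^Capacity:/,/^Allocatable:/p'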
	I1019 12:06:45.345089  356592 start.go:241] waiting for startup goroutines ...
	I1019 12:06:45.368073  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:45.368186  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:45.409816  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:45.673513  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:45.772576  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:45.868572  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:45.868715  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:45.910604  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:46.172277  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:46.369204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:46.369258  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:46.411583  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:46.511868  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:46.511910  356592 retry.go:31] will retry after 11.467531854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
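The validation error above says at least one YAML document inside ig-crd.yaml reached kubectl without apiVersion and kind set. The failure mode is easy to reproduce in isolation; a minimal sketch, where the file path and contents are illustrative and the exact error wording can vary with kubectl version and validation mode:

	# Reproduce the "apiVersion not set, kind not set" failure with a
	# deliberately incomplete manifest (illustrative only).
	cat > /tmp/broken.yaml <<'EOF'
	metadata:
	  name: example    # object body without apiVersion/kind
	EOF
	kubectl apply -f /tmp/broken.yaml
	# error: error validating "/tmp/broken.yaml": error validating data:
	# [apiVersion not set, kind not set]; if you choose to ignore these
	# errors, turn validation off with --validate=false

As the message itself notes, --validate=false only suppresses the client-side check; a document without type metadata still cannot be decoded into an object, so repairing the manifest header is the actual fix.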
	I1019 12:06:46.673444  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:46.867924  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:46.868088  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:46.910161  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:47.173132  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:47.368894  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:47.368989  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:47.410235  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:47.672512  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:47.867545  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:47.867941  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:47.911231  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:48.173619  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:48.367844  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:48.367982  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:48.410226  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:48.672609  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:48.867368  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:48.867467  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:48.910519  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:49.172509  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:49.367663  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:49.367734  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:49.410539  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:49.673707  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:49.868558  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:49.870058  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:49.911832  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:50.174079  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:50.368710  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:50.369110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:50.410695  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:50.673154  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:50.868905  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:50.869013  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:50.910245  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:51.172275  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:51.368010  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:51.368238  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:51.410316  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:51.672212  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:51.867537  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:51.867604  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:51.911076  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:52.173099  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:52.368532  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:52.368566  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:52.411138  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:52.687128  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:52.868588  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:52.868707  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:52.910852  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:53.173261  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:53.368780  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:53.368851  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:53.410085  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:53.673271  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:53.867356  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:53.867530  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:53.910370  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:54.172267  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:54.368255  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:54.368360  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:54.411402  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:54.671930  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:54.868458  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:54.868478  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:54.910031  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:55.172662  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:55.368472  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:55.368671  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:55.411899  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:55.672984  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:55.869004  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:55.869135  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:55.911359  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:56.172107  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:56.368581  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:56.368593  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:56.411168  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:56.672363  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:56.867859  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:56.867932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:56.910763  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.173070  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:57.368525  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:57.368620  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:57.410296  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.671905  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:57.868514  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:57.868545  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:57.910487  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:57.979646  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:06:58.172910  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:58.368650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:58.368801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:58.410896  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1019 12:06:58.654949  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:06:58.654983  356592 retry.go:31] will retry after 25.020490151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
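Note the spacing between attempts: roughly 11.5s after the first failure, then 25s after this one. The retry helper backs off with growing, jittered delays instead of re-applying at a fixed rate; a minimal sketch of the pattern, where apply_gadget_manifests is a hypothetical stand-in for the kubectl command above and the constants are illustrative:

	# Retry with growing backoff; illustrative, not minikube's retry.go.
	delay=11
	for attempt in 1 2 3; do
	    apply_gadget_manifests && break    # hypothetical helper
	    echo "will retry after ${delay}s"
	    sleep "$delay"
	    delay=$(( delay * 2 ))             # roughly matches 11s -> 25s
	done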
	I1019 12:06:58.673027  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:58.868184  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:58.868369  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:58.910444  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:59.172158  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:59.368159  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:59.368269  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:59.409782  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:06:59.673369  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:06:59.867630  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:06:59.867801  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:06:59.910773  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:00.172703  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:00.370204  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:00.370235  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:00.409909  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:00.673413  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:00.867969  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:00.868276  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:00.910369  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:01.171702  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:01.368219  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:01.368275  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:01.464591  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:01.672622  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:01.867895  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:01.867915  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:01.914011  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:02.172398  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:02.366965  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:02.367234  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:02.410542  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:02.672036  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:02.867865  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:02.867946  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:02.909434  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:03.172303  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:03.367949  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:03.368031  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:03.409639  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:03.672386  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:03.867642  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:03.867729  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:03.911132  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:04.173272  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:04.371217  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:04.371986  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:04.410531  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:04.672534  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:04.867994  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:04.868178  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:04.911049  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:05.173898  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:05.373838  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:05.374708  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:05.582689  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:05.678267  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:05.867912  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:05.867952  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:05.910762  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:06.172642  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:06.368688  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:06.368804  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:06.411337  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:06.673543  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:06.867650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:06.867886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:06.911171  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:07.173043  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:07.367221  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:07.367222  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:07.427491  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:07.672882  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:07.869040  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:07.869112  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:07.911119  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:08.173034  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:08.368886  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:08.373338  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:08.474224  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:08.673464  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:08.867734  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:08.867985  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:08.911098  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:09.173018  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:09.367991  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:09.368244  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:09.410127  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:09.673452  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:09.867738  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:09.867920  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:09.909955  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:10.173448  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:10.367562  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:10.367632  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:10.410607  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:10.672528  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:10.867605  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:10.867637  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:10.910146  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:11.173006  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:11.368078  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:11.368119  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:11.411213  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:11.673366  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:11.867570  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:11.867627  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:11.910182  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:12.172159  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:12.368327  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:12.368392  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:12.410617  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:12.673005  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:12.868165  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:12.868282  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:12.909760  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:13.172581  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:13.368110  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:13.368207  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:13.410177  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:13.673922  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:13.870569  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:13.871144  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:13.910292  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:14.172370  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:14.367770  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:14.367853  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:14.409999  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:14.673143  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:14.868924  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:14.870198  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:14.911304  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:15.173730  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:15.368331  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 12:07:15.368390  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:15.411601  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:15.674134  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:15.868506  356592 kapi.go:107] duration metric: took 1m10.504277719s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 12:07:15.868726  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:15.910876  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:16.172818  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:16.367822  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:16.410695  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:16.672650  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:16.868258  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:16.910192  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:17.171797  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:17.368680  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:17.411061  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:17.673404  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:17.867792  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:17.910937  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:18.247769  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:18.376273  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:18.410119  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:18.672012  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:18.868763  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:18.910697  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:19.173298  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:19.369293  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:19.410511  356592 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 12:07:19.673640  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:19.876015  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:19.910023  356592 kapi.go:107] duration metric: took 1m15.003240572s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 12:07:20.173646  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:20.368297  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:20.671972  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:20.867899  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:21.172391  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:21.367448  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:21.672388  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:21.867276  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:22.172261  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:22.368582  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:22.672932  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:22.868546  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:23.174103  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 12:07:23.369105  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:23.671952  356592 kapi.go:107] duration metric: took 1m12.002788104s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 12:07:23.673648  356592 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-042725 cluster.
	I1019 12:07:23.674875  356592 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 12:07:23.675970  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 12:07:23.677810  356592 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
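	
	The three gcp-auth hints above describe the addon's opt-out mechanism: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of opting a pod out (the pod name and the label value "true" are assumptions; the log only names the label key):
	
	    # Hypothetical pod that opts out of gcp-auth credential mounting.
	    # Label value "true" is an assumption; the log specifies only the key.
	    kubectl apply -f - <<EOF
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	        command: ["sleep", "3600"]
	    EOF
	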
	I1019 12:07:23.867788  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 12:07:24.296115  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:24.296151  356592 retry.go:31] will retry after 35.866657781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 12:07:24.368453  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:24.868140  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:25.375783  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:25.868377  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:26.368216  356592 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 12:07:26.868896  356592 kapi.go:107] duration metric: took 1m21.504666416s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 12:08:00.164742  356592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1019 12:08:00.691743  356592 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1019 12:08:00.691876  356592 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
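	
	The repeated inspektor-gadget failure above is self-describing: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because its top-level `apiVersion` and `kind` fields are unset. A hedged sketch of the two ways out, both taken from the error text itself (paths copied from the log; the CRD header shown is a placeholder, not the actual inspektor-gadget CRD):
	
	    # Option 1 (what the error message itself offers): skip client-side
	    # validation and apply the manifests as-is.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	
	    # Option 2: repair the manifest so every YAML document carries the
	    # required type metadata, e.g. a CRD document must begin with:
	    #   apiVersion: apiextensions.k8s.io/v1
	    #   kind: CustomResourceDefinition
	    # Quick check: print the file name if it lacks an apiVersion field.
	    grep -L 'apiVersion:' /etc/kubernetes/addons/ig-crd.yaml
	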
	I1019 12:08:00.693835  356592 out.go:179] * Enabled addons: registry-creds, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1019 12:08:00.695026  356592 addons.go:514] duration metric: took 1m57.399772156s for enable addons: enabled=[registry-creds cloud-spanner nvidia-device-plugin amd-gpu-device-plugin ingress-dns storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1019 12:08:00.695078  356592 start.go:246] waiting for cluster config update ...
	I1019 12:08:00.695103  356592 start.go:255] writing updated cluster config ...
	I1019 12:08:00.695459  356592 ssh_runner.go:195] Run: rm -f paused
	I1019 12:08:00.699243  356592 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:08:00.702784  356592 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8bhw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.706944  356592 pod_ready.go:94] pod "coredns-66bc5c9577-8bhw9" is "Ready"
	I1019 12:08:00.706964  356592 pod_ready.go:86] duration metric: took 4.159338ms for pod "coredns-66bc5c9577-8bhw9" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.708927  356592 pod_ready.go:83] waiting for pod "etcd-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.712273  356592 pod_ready.go:94] pod "etcd-addons-042725" is "Ready"
	I1019 12:08:00.712290  356592 pod_ready.go:86] duration metric: took 3.34608ms for pod "etcd-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.713954  356592 pod_ready.go:83] waiting for pod "kube-apiserver-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.717176  356592 pod_ready.go:94] pod "kube-apiserver-addons-042725" is "Ready"
	I1019 12:08:00.717197  356592 pod_ready.go:86] duration metric: took 3.224376ms for pod "kube-apiserver-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:00.718917  356592 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.102888  356592 pod_ready.go:94] pod "kube-controller-manager-addons-042725" is "Ready"
	I1019 12:08:01.102915  356592 pod_ready.go:86] duration metric: took 383.979363ms for pod "kube-controller-manager-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.303289  356592 pod_ready.go:83] waiting for pod "kube-proxy-8swjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.703118  356592 pod_ready.go:94] pod "kube-proxy-8swjm" is "Ready"
	I1019 12:08:01.703143  356592 pod_ready.go:86] duration metric: took 399.824693ms for pod "kube-proxy-8swjm" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:01.903371  356592 pod_ready.go:83] waiting for pod "kube-scheduler-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:02.302840  356592 pod_ready.go:94] pod "kube-scheduler-addons-042725" is "Ready"
	I1019 12:08:02.302869  356592 pod_ready.go:86] duration metric: took 399.467884ms for pod "kube-scheduler-addons-042725" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:08:02.302887  356592 pod_ready.go:40] duration metric: took 1.603615654s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:08:02.347940  356592 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:08:02.350311  356592 out.go:179] * Done! kubectl is now configured to use "addons-042725" cluster and "default" namespace by default
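	
	The pod_ready phase above iterates over one label selector per kube-system component. Roughly the same check can be reproduced by hand with kubectl (a sketch; the 4m timeout mirrors the log's "extra waiting up to 4m0s"):
	
	    # Wait for each control-plane component to report Ready, using the
	    # same label selectors the log lists, all in the kube-system namespace.
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy \
	               component=kube-scheduler; do
	      kubectl wait --namespace kube-system --for=condition=Ready pod \
	        --selector "$sel" --timeout=4m
	    done
	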
	
	
	==> CRI-O <==
	Oct 19 12:07:58 addons-042725 crio[772]: time="2025-10-19T12:07:58.176207601Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:07:58 addons-042725 crio[772]: time="2025-10-19T12:07:58.176260037Z" level=info msg="Removed pod sandbox: 4d15c75874e603aab1def409a21816129fb85e9524c08fba7a3cf73f2605c22a" id=5a3e7815-eb14-47cc-9db4-d679edca24eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.1397117Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fd8d3c54-c1b0-4e12-9ebd-e8e106d63626 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.139800677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.145548787Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34b6bbac60f50e217baf5835b34378f3673a1d218a27e2915b75f98967b1aa2e UID:2c8198e2-f656-4274-b959-45650f1182b1 NetNS:/var/run/netns/abd9ec76-7751-43a8-a152-a29622fd2d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005c6700}] Aliases:map[]}"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.145584968Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.155328503Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34b6bbac60f50e217baf5835b34378f3673a1d218a27e2915b75f98967b1aa2e UID:2c8198e2-f656-4274-b959-45650f1182b1 NetNS:/var/run/netns/abd9ec76-7751-43a8-a152-a29622fd2d02 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005c6700}] Aliases:map[]}"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.15546575Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.156240645Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.157284117Z" level=info msg="Ran pod sandbox 34b6bbac60f50e217baf5835b34378f3673a1d218a27e2915b75f98967b1aa2e with infra container: default/busybox/POD" id=fd8d3c54-c1b0-4e12-9ebd-e8e106d63626 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.158560034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a1844642-7ddc-470f-bcc3-ddd1779b7704 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.158697216Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a1844642-7ddc-470f-bcc3-ddd1779b7704 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.158730311Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a1844642-7ddc-470f-bcc3-ddd1779b7704 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.159392731Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=54795a43-290e-4173-bf7b-4191da2de2f1 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.161045909Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.910253648Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=54795a43-290e-4173-bf7b-4191da2de2f1 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.910880452Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ed3dad57-767d-46b4-a620-3df14adcd7f7 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.912208656Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac310b05-4343-42c3-a3ba-da438d383a53 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.915910401Z" level=info msg="Creating container: default/busybox/busybox" id=5d27f0dd-a9da-4bdd-9107-00e5a65653e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.916521414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.921596622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.92201221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.966447114Z" level=info msg="Created container 84e8f30032010e29b587a54000df633bd5f844ad16df59fd4bfd8820a85173f8: default/busybox/busybox" id=5d27f0dd-a9da-4bdd-9107-00e5a65653e5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.967073835Z" level=info msg="Starting container: 84e8f30032010e29b587a54000df633bd5f844ad16df59fd4bfd8820a85173f8" id=434bfac9-9d5e-4ff7-953c-7b55f9f5aa01 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:08:03 addons-042725 crio[772]: time="2025-10-19T12:08:03.96921543Z" level=info msg="Started container" PID=6499 containerID=84e8f30032010e29b587a54000df633bd5f844ad16df59fd4bfd8820a85173f8 description=default/busybox/busybox id=434bfac9-9d5e-4ff7-953c-7b55f9f5aa01 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34b6bbac60f50e217baf5835b34378f3673a1d218a27e2915b75f98967b1aa2e
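	
	The CRI-O excerpt above walks the standard CRI lifecycle for the busybox pod: ImageStatus (miss), PullImage, CreateContainer, StartContainer. The same steps can be inspected manually through crictl (a sketch; pod and container IDs will differ per run):
	
	    crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc   # ImageService/PullImage
	    crictl images | grep busybox                           # ImageService/ImageStatus
	    crictl pods --name busybox                             # locate the sandbox ID
	    crictl ps -a --name busybox                            # RuntimeService container states
	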
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	84e8f30032010       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   34b6bbac60f50       busybox                                     default
	3ade97065f11c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          45 seconds ago       Running             csi-snapshotter                          0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	fb7af3710e740       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          46 seconds ago       Running             csi-provisioner                          0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	a97ff90dab8de       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            47 seconds ago       Running             liveness-probe                           0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	2a1f70eb7742e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           47 seconds ago       Running             hostpath                                 0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	bcf2e921d1fbc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 48 seconds ago       Running             gcp-auth                                 0                   4ba69035e1c41       gcp-auth-78565c9fb4-vcs5x                   gcp-auth
	48fc5eed7d5dd       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                50 seconds ago       Running             node-driver-registrar                    0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	e6e0bbb22679d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            50 seconds ago       Running             gadget                                   0                   964d5efac1d49       gadget-tfffr                                gadget
	75028df70de03       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             53 seconds ago       Running             controller                               0                   39ee0b03cf8da       ingress-nginx-controller-675c5ddd98-jgc9g   ingress-nginx
	c01ae707db89e       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              57 seconds ago       Running             registry-proxy                           0                   9250fa8992155       registry-proxy-wlzbz                        kube-system
	ffff44fc42fb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   58 seconds ago       Running             csi-external-health-monitor-controller   0                   ce41d26d2c03f       csi-hostpathplugin-vjzh8                    kube-system
	00707c3c4bab5       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     59 seconds ago       Running             amd-gpu-device-plugin                    0                   181810b6bbe83       amd-gpu-device-plugin-h5jpt                 kube-system
	7e3eb26fc0ee1       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   902e355b125f7       nvidia-device-plugin-daemonset-ddp7p        kube-system
	1be6499ceead7       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   8a84d25454c41       snapshot-controller-7d9fbc56b8-qthpd        kube-system
	286cb01381b0e       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   f153b7650888c       csi-hostpath-attacher-0                     kube-system
	e74d01dfb7b1e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   7e5bce28fc5ab       csi-hostpath-resizer-0                      kube-system
	fbeda2203e379       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   c032311978789       ingress-nginx-admission-patch-92jcm         ingress-nginx
	15f3c32c2c116       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   7f9fa23f57690       snapshot-controller-7d9fbc56b8-bzfmt        kube-system
	b4cbe25106ffb       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   1f0b63d9c52f0       ingress-nginx-admission-create-p6q55        ingress-nginx
	fde2b1c07a1da       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   df43b2dff1f06       registry-6b586f9694-98h42                   kube-system
	069419553a5ee       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   5f8e570a7d824       yakd-dashboard-5ff678cb9-8kxtn              yakd-dashboard
	0f9b8df5b59c4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   86ea4623eebec       local-path-provisioner-648f6765c9-4xrmm     local-path-storage
	2f814989d8185       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   5529ff02db0a9       kube-ingress-dns-minikube                   kube-system
	3b868a98638bd       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   c27b011129b0d       metrics-server-85b7d694d7-m56bv             kube-system
	2eb361a243fb3       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   d420fc45ea86e       cloud-spanner-emulator-86bd5cbb97-blgzl     default
	1089a2c2700f2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   7d03bf28ac9fd       coredns-66bc5c9577-8bhw9                    kube-system
	7a4e144a7b1ee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   9903e5a1a73d2       storage-provisioner                         kube-system
	392500e9aeeb9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   47286d4dc497e       kube-proxy-8swjm                            kube-system
	cde6c4794a9e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   cbc7045394aed       kindnet-jkhpq                               kube-system
	396948a693fd8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   724f045c92738       kube-controller-manager-addons-042725       kube-system
	09349ccfaf4c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   a2aadcc280058       kube-apiserver-addons-042725                kube-system
	ae636ce017962       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   0b3da88054ae6       kube-scheduler-addons-042725                kube-system
	0d69b9d0659dd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   3048df84bc61e       etcd-addons-042725                          kube-system
	
	
	==> coredns [1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0] <==
	[INFO] 10.244.0.16:44476 - 31562 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.002980684s
	[INFO] 10.244.0.16:33172 - 25904 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000057806s
	[INFO] 10.244.0.16:33172 - 25561 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000067148s
	[INFO] 10.244.0.16:57603 - 62449 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.0000922s
	[INFO] 10.244.0.16:57603 - 61972 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000134971s
	[INFO] 10.244.0.16:36763 - 9050 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000047126s
	[INFO] 10.244.0.16:36763 - 9314 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000081806s
	[INFO] 10.244.0.16:60747 - 31049 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122212s
	[INFO] 10.244.0.16:60747 - 30678 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160133s
	[INFO] 10.244.0.22:38903 - 8512 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223911s
	[INFO] 10.244.0.22:35530 - 65525 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00031838s
	[INFO] 10.244.0.22:41676 - 33906 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133011s
	[INFO] 10.244.0.22:42565 - 56685 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136755s
	[INFO] 10.244.0.22:60539 - 4510 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111407s
	[INFO] 10.244.0.22:35671 - 3581 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135171s
	[INFO] 10.244.0.22:55490 - 47327 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004096202s
	[INFO] 10.244.0.22:46489 - 50619 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004762693s
	[INFO] 10.244.0.22:44525 - 51291 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005283155s
	[INFO] 10.244.0.22:55443 - 58125 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005644514s
	[INFO] 10.244.0.22:39411 - 4710 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005182971s
	[INFO] 10.244.0.22:43205 - 48734 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006065197s
	[INFO] 10.244.0.22:45996 - 47057 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005126078s
	[INFO] 10.244.0.22:42110 - 9949 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005688927s
	[INFO] 10.244.0.22:44348 - 22224 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00205042s
	[INFO] 10.244.0.22:41947 - 38479 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002122304s
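	
	The NXDOMAIN ladder above is ordinary resolver search-list expansion: with Kubernetes' default `ndots:5`, a name such as storage.googleapis.com (fewer than five dots) is tried against every search suffix before being queried as-is, which is why only the final absolute query returns NOERROR. A hedged look at the pod-side config that produces it (the suffixes are read off the queries above; the nameserver IP is an assumption, the conventional kube-dns ClusterIP):
	
	    kubectl exec busybox -- cat /etc/resolv.conf
	    # Expected shape (inferred, not captured in this log):
	    #   search default.svc.cluster.local svc.cluster.local cluster.local \
	    #          us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    #   nameserver 10.96.0.10     # assumption: default kube-dns service IP
	    #   options ndots:5
	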
	
	
	==> describe nodes <==
	Name:               addons-042725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-042725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=addons-042725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_05_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-042725
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-042725"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:05:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-042725
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:08:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:07:49 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:07:49 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:07:49 +0000   Sun, 19 Oct 2025 12:05:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:07:49 +0000   Sun, 19 Oct 2025 12:06:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-042725
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                44f09d92-1e2d-487d-b4c4-92e6e5b92b49
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-86bd5cbb97-blgzl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  gadget                      gadget-tfffr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  gcp-auth                    gcp-auth-78565c9fb4-vcs5x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jgc9g    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m7s
	  kube-system                 amd-gpu-device-plugin-h5jpt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-8bhw9                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 csi-hostpathplugin-vjzh8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 etcd-addons-042725                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m15s
	  kube-system                 kindnet-jkhpq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-addons-042725                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-042725        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-8swjm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-addons-042725                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 metrics-server-85b7d694d7-m56bv              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m7s
	  kube-system                 nvidia-device-plugin-daemonset-ddp7p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-6b586f9694-98h42                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 registry-creds-764b6fb674-rg7vx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 registry-proxy-wlzbz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-bzfmt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-qthpd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  local-path-storage          local-path-provisioner-648f6765c9-4xrmm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8kxtn               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m6s   kube-proxy       
	  Normal  Starting                 2m13s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s  kubelet          Node addons-042725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s  kubelet          Node addons-042725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s  kubelet          Node addons-042725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m9s   node-controller  Node addons-042725 event: Registered Node addons-042725 in Controller
	  Normal  NodeReady                87s    kubelet          Node addons-042725 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000024] ll header: 00000000: 5e d9 57 cd 8c ce 86 08 ee e4 43 27 08 00
	[Oct19 11:58] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 9d ba 29 a8 94 08 06
	[ +13.723956] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 51 7b b8 90 7d 08 06
	[  +0.000467] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 9d ba 29 a8 94 08 06
	[  +9.177153] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 26 79 3e df 90 19 08 06
	[Oct19 11:59] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	[  +3.234968] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 66 77 4b 59 2c 22 08 06
	[  +5.199947] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 16 23 d2 58 c8 f6 08 06
	[  +0.000408] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 79 3e df 90 19 08 06
	[ +17.300432] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 92 88 ab 43 3b 3e 08 06
	[  +0.000329] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 66 77 4b 59 2c 22 08 06
	[ +22.973053] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 31 d3 aa 8a bd 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	
	
	==> etcd [0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e] <==
	{"level":"warn","ts":"2025-10-19T12:05:55.212611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.219745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.226406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.234473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.241971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.257090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.263180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.269699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:05:55.315641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:05.776481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:05.782531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.723483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.729951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.744209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:06:32.751030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:07:05.580914Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.612483ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:07:05.581028Z","caller":"traceutil/trace.go:172","msg":"trace[1143347139] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"171.748594ms","start":"2025-10-19T12:07:05.409262Z","end":"2025-10-19T12:07:05.581010Z","steps":["trace[1143347139] 'range keys from in-memory index tree'  (duration: 171.536437ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.581068Z","caller":"traceutil/trace.go:172","msg":"trace[258740765] transaction","detail":"{read_only:false; response_revision:1070; number_of_response:1; }","duration":"139.48538ms","start":"2025-10-19T12:07:05.441566Z","end":"2025-10-19T12:07:05.581052Z","steps":["trace[258740765] 'process raft request'  (duration: 85.921607ms)","trace[258740765] 'compare'  (duration: 53.247314ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:07:05.581207Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.889867ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/gcp-auth-certs-patch-cjpqf\" limit:1 ","response":"range_response_count:1 size:4256"}
	{"level":"info","ts":"2025-10-19T12:07:05.581889Z","caller":"traceutil/trace.go:172","msg":"trace[605640462] range","detail":"{range_begin:/registry/pods/gcp-auth/gcp-auth-certs-patch-cjpqf; range_end:; response_count:1; response_revision:1069; }","duration":"200.573383ms","start":"2025-10-19T12:07:05.381298Z","end":"2025-10-19T12:07:05.581871Z","steps":["trace[605640462] 'range keys from in-memory index tree'  (duration: 199.562079ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.615028Z","caller":"traceutil/trace.go:172","msg":"trace[2127932943] transaction","detail":"{read_only:false; response_revision:1071; number_of_response:1; }","duration":"158.787077ms","start":"2025-10-19T12:07:05.456221Z","end":"2025-10-19T12:07:05.615008Z","steps":["trace[2127932943] 'process raft request'  (duration: 158.668886ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.801019Z","caller":"traceutil/trace.go:172","msg":"trace[82180297] transaction","detail":"{read_only:false; response_revision:1075; number_of_response:1; }","duration":"118.151801ms","start":"2025-10-19T12:07:05.682849Z","end":"2025-10-19T12:07:05.801001Z","steps":["trace[82180297] 'process raft request'  (duration: 118.101294ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:05.801157Z","caller":"traceutil/trace.go:172","msg":"trace[448752863] transaction","detail":"{read_only:false; response_revision:1074; number_of_response:1; }","duration":"160.420613ms","start":"2025-10-19T12:07:05.640727Z","end":"2025-10-19T12:07:05.801147Z","steps":["trace[448752863] 'process raft request'  (duration: 78.170677ms)","trace[448752863] 'compare'  (duration: 81.900574ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:07:07.361013Z","caller":"traceutil/trace.go:172","msg":"trace[996103451] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"155.396919ms","start":"2025-10-19T12:07:07.205597Z","end":"2025-10-19T12:07:07.360993Z","steps":["trace[996103451] 'process raft request'  (duration: 155.27591ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:07:18.246372Z","caller":"traceutil/trace.go:172","msg":"trace[790859880] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"186.670945ms","start":"2025-10-19T12:07:18.059681Z","end":"2025-10-19T12:07:18.246352Z","steps":["trace[790859880] 'process raft request'  (duration: 124.603812ms)","trace[790859880] 'compare'  (duration: 61.893846ms)"],"step_count":2}
	
	
	==> gcp-auth [bcf2e921d1fbc57c1dc8f9141610578d1ee199190d26ea348e19f74933486229] <==
	2025/10/19 12:07:22 GCP Auth Webhook started!
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	2025/10/19 12:08:02 Ready to marshal response ...
	2025/10/19 12:08:02 Ready to write response ...
	
	
	==> kernel <==
	 12:08:11 up  1:50,  0 user,  load average: 1.68, 1.83, 1.92
	Linux addons-042725 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1] <==
	E1019 12:06:34.586887       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 12:06:34.588018       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 12:06:34.662495       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 12:06:34.663673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 12:06:36.183297       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:06:36.183325       1 metrics.go:72] Registering metrics
	I1019 12:06:36.183388       1 controller.go:711] "Syncing nftables rules"
	I1019 12:06:44.590276       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:06:44.590340       1 main.go:301] handling current node
	I1019 12:06:54.584461       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:06:54.584566       1 main.go:301] handling current node
	I1019 12:07:04.583556       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:04.583598       1 main.go:301] handling current node
	I1019 12:07:14.584631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:14.584668       1 main.go:301] handling current node
	I1019 12:07:24.584393       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:24.584545       1 main.go:301] handling current node
	I1019 12:07:34.585582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:34.585622       1 main.go:301] handling current node
	I1019 12:07:44.587159       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:44.587225       1 main.go:301] handling current node
	I1019 12:07:54.586492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:07:54.586520       1 main.go:301] handling current node
	I1019 12:08:04.583526       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:08:04.583557       1 main.go:301] handling current node
	
	
	==> kube-apiserver [09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec] <==
	W1019 12:06:50.688729       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:50.690070       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.154.125:443: connect: connection refused" logger="UnhandledError"
	E1019 12:06:50.690169       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 12:06:50.690828       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.154.125:443: connect: connection refused" logger="UnhandledError"
	W1019 12:06:51.692147       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:51.692189       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1019 12:06:51.692201       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:06:51.692158       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:51.692270       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:06:51.693390       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1019 12:06:55.701047       1 handler_proxy.go:99] no RequestInfo found in the context
	E1019 12:06:55.701077       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.154.125:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	E1019 12:06:55.701169       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1019 12:06:55.718256       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 12:08:09.973499       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48254: use of closed network connection
	E1019 12:08:10.120394       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:48288: use of closed network connection
	
	
	==> kube-controller-manager [396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554] <==
	I1019 12:06:02.708322       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 12:06:02.708368       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:06:02.708596       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:06:02.709368       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:06:02.709442       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:06:02.709442       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:06:02.709451       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:06:02.709473       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 12:06:02.709524       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:06:02.709525       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:06:02.711721       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:06:02.712915       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:06:02.712915       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:02.717166       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:02.717179       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:06:02.723388       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:06:02.727647       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 12:06:32.717735       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1019 12:06:32.717877       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1019 12:06:32.717926       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 12:06:32.734370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1019 12:06:32.738130       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 12:06:32.818801       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:06:32.839013       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:06:47.712982       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963] <==
	I1019 12:06:04.167664       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:06:04.486778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:06:04.589534       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:06:04.589576       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:06:04.589656       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:06:04.699861       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:06:04.699933       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:06:04.707709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:06:04.708895       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:06:04.709236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:06:04.710831       1 config.go:309] "Starting node config controller"
	I1019 12:06:04.710892       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:06:04.711276       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:06:04.713655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:06:04.711370       1 config.go:200] "Starting service config controller"
	I1019 12:06:04.715700       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:06:04.711454       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:06:04.715964       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:06:04.811187       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:06:04.816532       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:06:04.817594       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:06:04.818762       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351] <==
	I1019 12:05:55.879920       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:05:55.880891       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1019 12:05:55.881082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:05:55.881237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:05:55.881453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:05:55.882413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:05:55.882466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:05:55.882625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:05:55.882660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:05:55.882754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:05:55.882834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:05:55.882890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:05:55.882892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:05:55.883187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:05:55.883222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:05:55.883282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:05:55.883378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:05:55.883706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:05:55.883776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:05:55.883917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:05:55.883987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:05:56.688486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:05:56.690481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:05:56.842963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1019 12:05:59.680577       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:07:12 addons-042725 kubelet[1274]: I1019 12:07:12.410116    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ddp7p" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:07:12 addons-042725 kubelet[1274]: I1019 12:07:12.410219    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h5jpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:07:12 addons-042725 kubelet[1274]: I1019 12:07:12.421278    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-h5jpt" podStartSLOduration=1.844245206 podStartE2EDuration="28.421254953s" podCreationTimestamp="2025-10-19 12:06:44 +0000 UTC" firstStartedPulling="2025-10-19 12:06:45.210717593 +0000 UTC m=+47.137153850" lastFinishedPulling="2025-10-19 12:07:11.787727331 +0000 UTC m=+73.714163597" observedRunningTime="2025-10-19 12:07:12.420887465 +0000 UTC m=+74.347323730" watchObservedRunningTime="2025-10-19 12:07:12.421254953 +0000 UTC m=+74.347691218"
	Oct 19 12:07:13 addons-042725 kubelet[1274]: I1019 12:07:13.414970    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h5jpt" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:07:15 addons-042725 kubelet[1274]: I1019 12:07:15.426106    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wlzbz" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:07:15 addons-042725 kubelet[1274]: I1019 12:07:15.440134    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-wlzbz" podStartSLOduration=2.122110474 podStartE2EDuration="31.440111792s" podCreationTimestamp="2025-10-19 12:06:44 +0000 UTC" firstStartedPulling="2025-10-19 12:06:45.230840598 +0000 UTC m=+47.157276845" lastFinishedPulling="2025-10-19 12:07:14.548841907 +0000 UTC m=+76.475278163" observedRunningTime="2025-10-19 12:07:15.439126807 +0000 UTC m=+77.365563072" watchObservedRunningTime="2025-10-19 12:07:15.440111792 +0000 UTC m=+77.366548060"
	Oct 19 12:07:16 addons-042725 kubelet[1274]: I1019 12:07:16.429812    1274 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-wlzbz" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 12:07:16 addons-042725 kubelet[1274]: E1019 12:07:16.689769    1274 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 19 12:07:16 addons-042725 kubelet[1274]: E1019 12:07:16.689877    1274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a-gcr-creds podName:0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a nodeName:}" failed. No retries permitted until 2025-10-19 12:07:48.68985338 +0000 UTC m=+110.616289640 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a-gcr-creds") pod "registry-creds-764b6fb674-rg7vx" (UID: "0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a") : secret "registry-creds-gcr" not found
	Oct 19 12:07:19 addons-042725 kubelet[1274]: I1019 12:07:19.469945    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-jgc9g" podStartSLOduration=57.78326682 podStartE2EDuration="1m15.469923887s" podCreationTimestamp="2025-10-19 12:06:04 +0000 UTC" firstStartedPulling="2025-10-19 12:07:00.741778903 +0000 UTC m=+62.668215162" lastFinishedPulling="2025-10-19 12:07:18.428435963 +0000 UTC m=+80.354872229" observedRunningTime="2025-10-19 12:07:19.469677812 +0000 UTC m=+81.396114079" watchObservedRunningTime="2025-10-19 12:07:19.469923887 +0000 UTC m=+81.396360152"
	Oct 19 12:07:21 addons-042725 kubelet[1274]: I1019 12:07:21.473442    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-tfffr" podStartSLOduration=66.47860505 podStartE2EDuration="1m17.473402408s" podCreationTimestamp="2025-10-19 12:06:04 +0000 UTC" firstStartedPulling="2025-10-19 12:07:09.894250441 +0000 UTC m=+71.820686685" lastFinishedPulling="2025-10-19 12:07:20.889047797 +0000 UTC m=+82.815484043" observedRunningTime="2025-10-19 12:07:21.472249045 +0000 UTC m=+83.398685334" watchObservedRunningTime="2025-10-19 12:07:21.473402408 +0000 UTC m=+83.399838673"
	Oct 19 12:07:23 addons-042725 kubelet[1274]: I1019 12:07:23.479958    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-vcs5x" podStartSLOduration=67.875769675 podStartE2EDuration="1m12.479932535s" podCreationTimestamp="2025-10-19 12:06:11 +0000 UTC" firstStartedPulling="2025-10-19 12:07:18.251956073 +0000 UTC m=+80.178392331" lastFinishedPulling="2025-10-19 12:07:22.856118929 +0000 UTC m=+84.782555191" observedRunningTime="2025-10-19 12:07:23.479284305 +0000 UTC m=+85.405720569" watchObservedRunningTime="2025-10-19 12:07:23.479932535 +0000 UTC m=+85.406368800"
	Oct 19 12:07:25 addons-042725 kubelet[1274]: I1019 12:07:25.253625    1274 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 19 12:07:25 addons-042725 kubelet[1274]: I1019 12:07:25.253671    1274 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 19 12:07:26 addons-042725 kubelet[1274]: I1019 12:07:26.503774    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vjzh8" podStartSLOduration=1.28927552 podStartE2EDuration="42.503749587s" podCreationTimestamp="2025-10-19 12:06:44 +0000 UTC" firstStartedPulling="2025-10-19 12:06:45.211337447 +0000 UTC m=+47.137773691" lastFinishedPulling="2025-10-19 12:07:26.425811508 +0000 UTC m=+88.352247758" observedRunningTime="2025-10-19 12:07:26.50371569 +0000 UTC m=+88.430151955" watchObservedRunningTime="2025-10-19 12:07:26.503749587 +0000 UTC m=+88.430185853"
	Oct 19 12:07:36 addons-042725 kubelet[1274]: I1019 12:07:36.157624    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c786de6-bdd3-40ed-9cb3-35d4a6b76e48" path="/var/lib/kubelet/pods/5c786de6-bdd3-40ed-9cb3-35d4a6b76e48/volumes"
	Oct 19 12:07:36 addons-042725 kubelet[1274]: I1019 12:07:36.158072    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fad6dc3-64c7-4843-ba88-98520d70fbac" path="/var/lib/kubelet/pods/7fad6dc3-64c7-4843-ba88-98520d70fbac/volumes"
	Oct 19 12:07:48 addons-042725 kubelet[1274]: E1019 12:07:48.729370    1274 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 19 12:07:48 addons-042725 kubelet[1274]: E1019 12:07:48.729462    1274 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a-gcr-creds podName:0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a nodeName:}" failed. No retries permitted until 2025-10-19 12:08:52.729446469 +0000 UTC m=+174.655882713 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a-gcr-creds") pod "registry-creds-764b6fb674-rg7vx" (UID: "0f5a9aa9-a7ea-45e4-9c93-90e5124cca2a") : secret "registry-creds-gcr" not found
	Oct 19 12:07:58 addons-042725 kubelet[1274]: I1019 12:07:58.150367    1274 scope.go:117] "RemoveContainer" containerID="0c42428a6c685018ff6269fdce70cb31a66aad8b9cc4e6f555e42c88b1ae5f60"
	Oct 19 12:07:58 addons-042725 kubelet[1274]: I1019 12:07:58.159767    1274 scope.go:117] "RemoveContainer" containerID="1dd57d7454aaae5ccbb741298cd1896d8366b30afe2baaa8d7d973e40518093f"
	Oct 19 12:08:02 addons-042725 kubelet[1274]: I1019 12:08:02.931687    1274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fzkc\" (UniqueName: \"kubernetes.io/projected/2c8198e2-f656-4274-b959-45650f1182b1-kube-api-access-7fzkc\") pod \"busybox\" (UID: \"2c8198e2-f656-4274-b959-45650f1182b1\") " pod="default/busybox"
	Oct 19 12:08:02 addons-042725 kubelet[1274]: I1019 12:08:02.931767    1274 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2c8198e2-f656-4274-b959-45650f1182b1-gcp-creds\") pod \"busybox\" (UID: \"2c8198e2-f656-4274-b959-45650f1182b1\") " pod="default/busybox"
	Oct 19 12:08:04 addons-042725 kubelet[1274]: I1019 12:08:04.631978    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.879317845 podStartE2EDuration="2.631953868s" podCreationTimestamp="2025-10-19 12:08:02 +0000 UTC" firstStartedPulling="2025-10-19 12:08:03.159028093 +0000 UTC m=+125.085464341" lastFinishedPulling="2025-10-19 12:08:03.911664102 +0000 UTC m=+125.838100364" observedRunningTime="2025-10-19 12:08:04.630997427 +0000 UTC m=+126.557433694" watchObservedRunningTime="2025-10-19 12:08:04.631953868 +0000 UTC m=+126.558390133"
	Oct 19 12:08:09 addons-042725 kubelet[1274]: E1019 12:08:09.973434    1274 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34834->127.0.0.1:34463: write tcp 127.0.0.1:34834->127.0.0.1:34463: write: broken pipe
	
	
	==> storage-provisioner [7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe] <==
	W1019 12:07:47.624476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:49.627599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:49.631864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:51.635160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:51.639921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:53.643387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:53.647308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:55.650686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:55.654405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:57.657583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:57.661512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:59.664493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:07:59.668305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:01.671961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:01.675605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:03.678554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:03.684077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:05.686608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:05.691160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:07.693964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:07.697866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:09.700302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:09.704079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:11.706234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:08:11.711151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-042725 -n addons-042725
helpers_test.go:269: (dbg) Run:  kubectl --context addons-042725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm registry-creds-764b6fb674-rg7vx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm registry-creds-764b6fb674-rg7vx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm registry-creds-764b6fb674-rg7vx: exit status 1 (71.399852ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p6q55" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-92jcm" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rg7vx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-042725 describe pod ingress-nginx-admission-create-p6q55 ingress-nginx-admission-patch-92jcm registry-creds-764b6fb674-rg7vx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable headlamp --alsologtostderr -v=1: exit status 11 (242.05202ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:12.658847  365933 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:12.659181  365933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:12.659195  365933 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:12.659202  365933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:12.659513  365933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:12.659892  365933 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:12.660387  365933 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:12.660414  365933 addons.go:606] checking whether the cluster is paused
	I1019 12:08:12.660557  365933 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:12.660575  365933 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:12.661108  365933 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:12.680729  365933 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:12.680787  365933 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:12.698379  365933 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:12.793065  365933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:12.793163  365933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:12.822142  365933 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:12.822165  365933 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:12.822169  365933 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:12.822173  365933 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:12.822175  365933 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:12.822178  365933 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:12.822181  365933 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:12.822183  365933 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:12.822185  365933 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:12.822191  365933 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:12.822196  365933 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:12.822200  365933 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:12.822205  365933 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:12.822209  365933 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:12.822213  365933 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:12.822235  365933 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:12.822243  365933 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:12.822249  365933 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:12.822252  365933 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:12.822254  365933 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:12.822257  365933 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:12.822259  365933 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:12.822262  365933 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:12.822264  365933 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:12.822267  365933 cri.go:89] found id: ""
	I1019 12:08:12.822320  365933 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:12.836141  365933 out.go:203] 
	W1019 12:08:12.837415  365933 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:12.837445  365933 out.go:285] * 
	* 
	W1019 12:08:12.841487  365933 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:12.842718  365933 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.49s)
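
Note: this failure and the other MK_ADDON_DISABLE_PAUSED failures in this report share the root cause visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, which fails here because the runc state directory /run/runc does not exist on this crio node. A minimal diagnostic sketch, assuming SSH access to the addons-042725 profile; the commands are lifted from the captured logs above rather than from minikube's source:

	$ minikube -p addons-042725 ssh -- ls -ld /run/runc
	# expected to fail: the runc state directory is missing on the node
	$ minikube -p addons-042725 ssh -- sudo runc list -f json
	# reproduces the exact "open /run/runc: no such file or directory" error above
	$ minikube -p addons-042725 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# succeeds and prints the kube-system container IDs listed in the cri.go lines above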

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-blgzl" [6454abdc-52b9-4aee-b757-5478bcd6d76d] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008462321s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (249.40321ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1019 12:08:20.666866  366640 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:20.667138  366640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:20.667148  366640 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:20.667152  366640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:20.667406  366640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:20.667731  366640 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:20.668130  366640 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:20.668150  366640 addons.go:606] checking whether the cluster is paused
	I1019 12:08:20.668247  366640 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:20.668262  366640 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:20.668712  366640 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:20.690057  366640 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:20.690120  366640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:20.709905  366640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:20.807613  366640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:20.807698  366640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:20.843228  366640 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:20.843253  366640 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:20.843260  366640 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:20.843264  366640 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:20.843268  366640 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:20.843273  366640 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:20.843277  366640 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:20.843281  366640 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:20.843286  366640 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:20.843293  366640 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:20.843301  366640 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:20.843305  366640 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:20.843309  366640 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:20.843315  366640 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:20.843331  366640 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:20.843340  366640 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:20.843347  366640 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:20.843352  366640 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:20.843356  366640 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:20.843360  366640 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:20.843364  366640 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:20.843368  366640 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:20.843372  366640 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:20.843375  366640 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:20.843379  366640 cri.go:89] found id: ""
	I1019 12:08:20.843447  366640 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:20.858234  366640 out.go:203] 
	W1019 12:08:20.859585  366640 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:20.859611  366640 out.go:285] * 
	* 
	W1019 12:08:20.863549  366640 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:20.864872  366640 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
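Note on the failure mode: this and the other MK_ADDON_DISABLE_PAUSED failures in this run share one root cause. Before disabling an addon, minikube checks whether the cluster is paused; on a crio cluster it first lists kube-system containers via crictl (cri.go:54 above) and then shells out to `sudo runc list -f json`. That runc invocation fails with "open /run/runc: no such file or directory", presumably because nothing on this node ever created runc's default state directory, so the pause check — and with it the whole disable — aborts. A minimal Go sketch of the failing probe, assuming only what the log shows (the function name listPausedContainers is illustrative, not minikube's actual symbol):

package main

import (
	"fmt"
	"os/exec"
)

// listPausedContainers reproduces the probe seen in the logs:
// `sudo runc list -f json`. On this node it exits non-zero with
// "open /run/runc: no such file or directory".
func listPausedContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("runc list: %w (output: %s)", err, out)
	}
	return string(out), nil
}

func main() {
	if _, err := listPausedContainers(); err != nil {
		// This branch corresponds to the MK_ADDON_DISABLE_PAUSED exits above.
		fmt.Println("check paused failed:", err)
	}
}

When triaging, pointing runc at the state root the runtime actually uses (runc's global --root flag) is the first thing to try.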
TestAddons/parallel/LocalPath (8.1s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-042725 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-042725 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4d6126c4-e725-4c45-b603-d994fdd31b0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4d6126c4-e725-4c45-b603-d994fdd31b0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4d6126c4-e725-4c45-b603-d994fdd31b0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003298419s
addons_test.go:967: (dbg) Run:  kubectl --context addons-042725 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 ssh "cat /opt/local-path-provisioner/pvc-275508de-1e47-445a-b7b2-b1fe712e92c0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-042725 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-042725 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.617536ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 12:08:20.743463  366670 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:20.743737  366670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:20.743747  366670 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:20.743751  366670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:20.744013  366670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:20.744364  366670 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:20.744718  366670 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:20.744735  366670 addons.go:606] checking whether the cluster is paused
	I1019 12:08:20.744814  366670 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:20.744826  366670 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:20.745209  366670 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:20.763926  366670 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:20.763989  366670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:20.781380  366670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:20.884640  366670 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:20.884705  366670 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:20.915572  366670 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:20.915596  366670 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:20.915599  366670 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:20.915603  366670 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:20.915605  366670 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:20.915609  366670 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:20.915611  366670 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:20.915614  366670 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:20.915616  366670 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:20.915621  366670 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:20.915624  366670 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:20.915640  366670 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:20.915649  366670 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:20.915653  366670 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:20.915658  366670 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:20.915668  366670 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:20.915673  366670 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:20.915684  366670 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:20.915689  366670 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:20.915693  366670 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:20.915696  366670 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:20.915699  366670 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:20.915701  366670 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:20.915704  366670 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:20.915707  366670 cri.go:89] found id: ""
	I1019 12:08:20.915757  366670 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:20.932111  366670 out.go:203] 
	W1019 12:08:20.935316  366670 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:20.935350  366670 out.go:285] * 
	* 
	W1019 12:08:20.940350  366670 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:20.942275  366670 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.10s)
TestAddons/parallel/NvidiaDevicePlugin (5.25s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-ddp7p" [08f99b85-a997-45d0-a756-3960d768dc50] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003673673s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.738887ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 12:08:15.411686  366140 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:15.411942  366140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:15.411951  366140 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:15.411955  366140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:15.412132  366140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:15.412395  366140 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:15.412788  366140 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:15.412806  366140 addons.go:606] checking whether the cluster is paused
	I1019 12:08:15.412885  366140 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:15.412897  366140 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:15.413235  366140 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:15.432978  366140 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:15.433046  366140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:15.452358  366140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:15.549980  366140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:15.550078  366140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:15.580788  366140 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:15.580826  366140 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:15.580831  366140 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:15.580834  366140 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:15.580843  366140 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:15.580849  366140 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:15.580852  366140 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:15.580856  366140 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:15.580860  366140 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:15.580874  366140 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:15.580880  366140 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:15.580883  366140 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:15.580886  366140 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:15.580888  366140 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:15.580891  366140 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:15.580902  366140 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:15.580910  366140 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:15.580914  366140 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:15.580916  366140 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:15.580919  366140 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:15.580921  366140 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:15.580923  366140 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:15.580925  366140 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:15.580928  366140 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:15.580930  366140 cri.go:89] found id: ""
	I1019 12:08:15.580983  366140 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:15.594779  366140 out.go:203] 
	W1019 12:08:15.596227  366140 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:15.596251  366140 out.go:285] * 
	* 
	W1019 12:08:15.600352  366140 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:15.601899  366140 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)
TestAddons/parallel/Yakd (6.3s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8kxtn" [d521e97f-42d5-4e14-b9a7-312ec76e0217] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004703162s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable yakd --alsologtostderr -v=1: exit status 11 (290.012922ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 12:08:33.248531  368386 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:33.248931  368386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:33.248942  368386 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:33.248948  368386 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:33.249258  368386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:33.249766  368386 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:33.250321  368386 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:33.250348  368386 addons.go:606] checking whether the cluster is paused
	I1019 12:08:33.250507  368386 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:33.250526  368386 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:33.251201  368386 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:33.273096  368386 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:33.273466  368386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:33.296364  368386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:33.404735  368386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:33.404846  368386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:33.444203  368386 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:33.444242  368386 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:33.444249  368386 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:33.444253  368386 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:33.444258  368386 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:33.444263  368386 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:33.444267  368386 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:33.444271  368386 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:33.444275  368386 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:33.444283  368386 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:33.444288  368386 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:33.444292  368386 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:33.444296  368386 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:33.444300  368386 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:33.444304  368386 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:33.444311  368386 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:33.444315  368386 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:33.444321  368386 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:33.444324  368386 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:33.444328  368386 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:33.444332  368386 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:33.444336  368386 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:33.444340  368386 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:33.444344  368386 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:33.444348  368386 cri.go:89] found id: ""
	I1019 12:08:33.444404  368386 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:33.463163  368386 out.go:203] 
	W1019 12:08:33.464983  368386 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:33.465007  368386 out.go:285] * 
	* 
	W1019 12:08:33.471616  368386 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:33.473432  368386 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.30s)
TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-h5jpt" [6034192f-2361-4c90-bbe0-6e827369a4ac] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003750311s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-042725 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-042725 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (231.663707ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1019 12:08:29.720099  368131 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:08:29.720368  368131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:29.720379  368131 out.go:374] Setting ErrFile to fd 2...
	I1019 12:08:29.720385  368131 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:08:29.720631  368131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:08:29.720932  368131 mustload.go:65] Loading cluster: addons-042725
	I1019 12:08:29.721287  368131 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:29.721308  368131 addons.go:606] checking whether the cluster is paused
	I1019 12:08:29.721406  368131 config.go:182] Loaded profile config "addons-042725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:08:29.721434  368131 host.go:66] Checking if "addons-042725" exists ...
	I1019 12:08:29.721840  368131 cli_runner.go:164] Run: docker container inspect addons-042725 --format={{.State.Status}}
	I1019 12:08:29.739403  368131 ssh_runner.go:195] Run: systemctl --version
	I1019 12:08:29.739481  368131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-042725
	I1019 12:08:29.756804  368131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/addons-042725/id_rsa Username:docker}
	I1019 12:08:29.852850  368131 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:08:29.852926  368131 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:08:29.882799  368131 cri.go:89] found id: "3ade97065f11c20acf1af73dd277992d95f3ae5802e8d07d5fd542d24af36313"
	I1019 12:08:29.882821  368131 cri.go:89] found id: "fb7af3710e7401b77c5f5a0079352d7506bea96318ae4bfe6a754d0740097851"
	I1019 12:08:29.882831  368131 cri.go:89] found id: "a97ff90dab8dea25f03d3f7c1155d8aa3cfae64b1b04ee1ca710026b1a06ca78"
	I1019 12:08:29.882835  368131 cri.go:89] found id: "2a1f70eb7742e777d4d8846eb8c1b4ca960cae64f379117b5e5898a8c8b8b965"
	I1019 12:08:29.882837  368131 cri.go:89] found id: "48fc5eed7d5dd92abcbbe1415c3bc4f946390bfd63cb7ee97c602b81060e5684"
	I1019 12:08:29.882841  368131 cri.go:89] found id: "c01ae707db89ef76015e668e98a815f4e3ad3052c5434509de9420c44e3fda77"
	I1019 12:08:29.882843  368131 cri.go:89] found id: "ffff44fc42fb17cfcb57192e6579faad127ef2b2abc84a6acbe337d7a0f709d3"
	I1019 12:08:29.882845  368131 cri.go:89] found id: "00707c3c4bab5accca474e464ca31f8655a089c334eb3313a4cf41d12bf3f873"
	I1019 12:08:29.882848  368131 cri.go:89] found id: "7e3eb26fc0ee18da3e57fabd864039da30fdcac9004c5b5f908c49ca09a3b452"
	I1019 12:08:29.882853  368131 cri.go:89] found id: "1be6499ceead7da115e5802e1170f992b9bb9455e91e1d4ebeb9cb0d2cf83275"
	I1019 12:08:29.882855  368131 cri.go:89] found id: "286cb01381b0e53806bc8db7b8e8d7bd63f8e107baf455496f995a7c58e050d4"
	I1019 12:08:29.882858  368131 cri.go:89] found id: "e74d01dfb7b1eb6e6538012deafae84a41e541cc1c1e0e7e9a4cfeb8527d1481"
	I1019 12:08:29.882860  368131 cri.go:89] found id: "15f3c32c2c1165c55dfa639a115a5532397ffa43f4b4ee3a9d0a37a0819d08a8"
	I1019 12:08:29.882862  368131 cri.go:89] found id: "fde2b1c07a1dad1f8f9570201ec18c80ad94199ff324412ad6590fc08a5bd5a0"
	I1019 12:08:29.882865  368131 cri.go:89] found id: "2f814989d818529b02bd1db5f99d44b5fe0a76b885f1d792e44cd419a3901bae"
	I1019 12:08:29.882871  368131 cri.go:89] found id: "3b868a98638bdf22749cba79f4cd68d2bca91f7bcb2c793dc93f31ef03a228db"
	I1019 12:08:29.882874  368131 cri.go:89] found id: "1089a2c2700f20dc05a7d9d8e35be1dc52f9839a419bfac7de25596a2fa78ff0"
	I1019 12:08:29.882878  368131 cri.go:89] found id: "7a4e144a7b1ee2098ab09dc9686ddbcbea00a6cac47bd26063d82e54fd0caffe"
	I1019 12:08:29.882880  368131 cri.go:89] found id: "392500e9aeeb9faab9c877896ab5bcf4be2eb4c5cc7e34f3ecb848ee0419a963"
	I1019 12:08:29.882883  368131 cri.go:89] found id: "cde6c4794a9e27fcebb76961b52b92a3b3bf22958cbcac3e9b69a6e55c1a62c1"
	I1019 12:08:29.882886  368131 cri.go:89] found id: "396948a693fd82d13884b3c38eabec04f43cb203092469f112f5217ac5d35554"
	I1019 12:08:29.882888  368131 cri.go:89] found id: "09349ccfaf4c06a44db2da4aa4f209972cde3c6580af51d6a5e63ab22ed20fec"
	I1019 12:08:29.882890  368131 cri.go:89] found id: "ae636ce0179629b97346afb19751d1366d6bd68fcec6f23e5e4b4bbd18de8351"
	I1019 12:08:29.882893  368131 cri.go:89] found id: "0d69b9d0659dd3cbf185ed2e86cade60d390fd4f059908956f8e03ea3000cb3e"
	I1019 12:08:29.882895  368131 cri.go:89] found id: ""
	I1019 12:08:29.882932  368131 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:08:29.896673  368131 out.go:203] 
	W1019 12:08:29.897831  368131 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:08:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:08:29.897849  368131 out.go:285] * 
	* 
	W1019 12:08:29.901922  368131 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:08:29.903259  368131 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-042725 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.24s)
TestFunctional/parallel/ServiceCmdConnect (602.89s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-688409 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-688409 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-whg26" [593134e8-59ef-4529-ad43-9931d16be761] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-688409 -n functional-688409
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-19 12:23:58.084879664 +0000 UTC m=+1125.775323511
functional_test.go:1645: (dbg) Run:  kubectl --context functional-688409 describe po hello-node-connect-7d85dfc575-whg26 -n default
functional_test.go:1645: (dbg) kubectl --context functional-688409 describe po hello-node-connect-7d85dfc575-whg26 -n default:
Name:             hello-node-connect-7d85dfc575-whg26
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-688409/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:13:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lg8jp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lg8jp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-whg26 to functional-688409
Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
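The ErrImagePull above is a short-name resolution failure, not a missing image: the node's container-image configuration runs with short-name mode "enforcing", and the unqualified reference kicbase/echo-server matches more than one unqualified-search registry, so the runtime refuses to pick one ("returns ambiguous list"). Fully qualifying the reference sidesteps the ambiguity; for example (assuming the image is published under docker.io — compare the unqualified command at functional_test.go:1636 above):

kubectl --context functional-688409 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest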
functional_test.go:1645: (dbg) Run:  kubectl --context functional-688409 logs hello-node-connect-7d85dfc575-whg26 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-688409 logs hello-node-connect-7d85dfc575-whg26 -n default: exit status 1 (67.333258ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-whg26" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-688409 logs hello-node-connect-7d85dfc575-whg26 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-688409 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-whg26
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-688409/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:13:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lg8jp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lg8jp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-whg26 to functional-688409
Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1618: (dbg) Run:  kubectl --context functional-688409 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-688409 logs -l app=hello-node-connect: exit status 1 (62.114309ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-whg26" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-688409 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-688409 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.218.253
IPs:                      10.110.218.253
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31638/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
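Note that Endpoints is empty: no pod behind the selector ever became Ready, so NodePort 31638 has nothing to route to, and the connectivity step this test exercises (for example `out/minikube-linux-amd64 -p functional-688409 service hello-node-connect --url`, shown here only as an illustration) could not have succeeded regardless of networking.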
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-688409
helpers_test.go:243: (dbg) docker inspect functional-688409:
-- stdout --
	[
	    {
	        "Id": "37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b",
	        "Created": "2025-10-19T12:11:56.502278262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 379277,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:11:56.533760347Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b/hosts",
	        "LogPath": "/var/lib/docker/containers/37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b/37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b-json.log",
	        "Name": "/functional-688409",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-688409:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-688409",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37463609cbb37070e1e140a88f15fa0bb40de428b4a219e938cb7eb11543da2b",
	                "LowerDir": "/var/lib/docker/overlay2/fa94120c3c84d65cebbf8d10ab93845da024b46eb2d01f646bfdb4e6404604a3-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fa94120c3c84d65cebbf8d10ab93845da024b46eb2d01f646bfdb4e6404604a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fa94120c3c84d65cebbf8d10ab93845da024b46eb2d01f646bfdb4e6404604a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fa94120c3c84d65cebbf8d10ab93845da024b46eb2d01f646bfdb4e6404604a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-688409",
	                "Source": "/var/lib/docker/volumes/functional-688409/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-688409",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-688409",
	                "name.minikube.sigs.k8s.io": "functional-688409",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b0f1d780f19fbb97b5e9f5e96ebddddd76a4e5cc87ecbdd46dd556cae610d12",
	            "SandboxKey": "/var/run/docker/netns/7b0f1d780f19",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-688409": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:13:1f:13:9a:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b46f9926e4101503c69bf914a0994505f5ff16a6e96e4ce9cf0e4d6abbe580d5",
	                    "EndpointID": "adace42df475fae2f45c728f3787c712b128b45482c47c9be1004afd8941c82a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-688409",
	                        "37463609cbb3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
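The block above is the raw docker inspect record for the kic node container backing this profile; the NetworkSettings.Ports map is where the 127.0.0.1 host ports dialed by the service tests come from. As a local-triage sketch (the profile name functional-688409 and the docker driver are taken from this run), the same port table can be read directly with a Go template filter:

	# Print only the published host ports of the minikube node container.
	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-688409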
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-688409 -n functional-688409
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 logs -n 25: (1.293268495s)
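The two commands above are the harness's standard post-mortem sequence (a host status check, then the last 25 lines of cluster logs) and can be replayed by hand while the profile is still up; a sketch reusing the exact invocations from this report:

	# Host state for the profile, then the last 25 lines of cluster logs.
	out/minikube-linux-amd64 status --format={{.Host}} -p functional-688409 -n functional-688409
	out/minikube-linux-amd64 -p functional-688409 logs -n 25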
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-688409 image ls                                                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image load --daemon kicbase/echo-server:functional-688409 --alsologtostderr                                                                   │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls                                                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /etc/ssl/certs/355262.pem                                                                                                        │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image save kicbase/echo-server:functional-688409 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /usr/share/ca-certificates/355262.pem                                                                                            │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image rm kicbase/echo-server:functional-688409 --alsologtostderr                                                                              │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls                                                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /etc/ssl/certs/3552622.pem                                                                                                       │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image save --daemon kicbase/echo-server:functional-688409 --alsologtostderr                                                                   │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /usr/share/ca-certificates/3552622.pem                                                                                           │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ license        │                                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh sudo cat /etc/test/nested/copy/355262/hosts                                                                                               │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls --format short --alsologtostderr                                                                                                     │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls --format yaml --alsologtostderr                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ ssh            │ functional-688409 ssh pgrep buildkitd                                                                                                                           │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │                     │
	│ image          │ functional-688409 image build -t localhost/my-image:functional-688409 testdata/build --alsologtostderr                                                          │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls                                                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls --format json --alsologtostderr                                                                                                      │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ image          │ functional-688409 image ls --format table --alsologtostderr                                                                                                     │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ update-context │ functional-688409 update-context --alsologtostderr -v=2                                                                                                         │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ update-context │ functional-688409 update-context --alsologtostderr -v=2                                                                                                         │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	│ update-context │ functional-688409 update-context --alsologtostderr -v=2                                                                                                         │ functional-688409 │ jenkins │ v1.37.0 │ 19 Oct 25 12:14 UTC │ 19 Oct 25 12:14 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:14:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:14:18.939146  392594 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:14:18.939406  392594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.939416  392594 out.go:374] Setting ErrFile to fd 2...
	I1019 12:14:18.939436  392594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.939797  392594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:14:18.940357  392594 out.go:368] Setting JSON to false
	I1019 12:14:18.941372  392594 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7007,"bootTime":1760869052,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:14:18.941496  392594 start.go:141] virtualization: kvm guest
	I1019 12:14:18.943418  392594 out.go:179] * [functional-688409] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:14:18.945068  392594 notify.go:220] Checking for updates...
	I1019 12:14:18.945103  392594 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:14:18.946601  392594 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:14:18.947895  392594 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:14:18.949265  392594 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:14:18.953929  392594 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:14:18.955230  392594 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:14:18.956861  392594 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:18.957340  392594 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:14:18.980269  392594 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:14:18.980440  392594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:14:19.037098  392594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 12:14:19.026655311 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:14:19.037204  392594 docker.go:318] overlay module found
	I1019 12:14:19.038984  392594 out.go:179] * Using the docker driver based on existing profile
	I1019 12:14:19.040129  392594 start.go:305] selected driver: docker
	I1019 12:14:19.040143  392594 start.go:925] validating driver "docker" against &{Name:functional-688409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-688409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:19.040239  392594 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:14:19.040334  392594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:14:19.098006  392594 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 12:14:19.087495829 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:14:19.098677  392594 cni.go:84] Creating CNI manager for ""
	I1019 12:14:19.098753  392594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:14:19.098814  392594 start.go:349] cluster config:
	{Name:functional-688409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-688409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:19.100586  392594 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 19 12:14:28 functional-688409 crio[3605]: time="2025-10-19T12:14:28.321839787Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=ec617904-262b-4db4-8946-fe1a8b2e6ab4 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:14:28 functional-688409 crio[3605]: time="2025-10-19T12:14:28.32319515Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.122857993Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=ec617904-262b-4db4-8946-fe1a8b2e6ab4 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.123565473Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=48a5546c-fe09-447c-b178-74c8cf34064f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.126355682Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=03de7f9d-600b-499e-ab5d-5a99b3c79f8b name=/runtime.v1.ImageService/PullImage
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.126743136Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=bea0e16c-01d6-4843-a6ff-4341ee733aff name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.132166249Z" level=info msg="Creating container: default/mysql-5bb876957f-hhrsg/mysql" id=61255a3f-38ba-4c49-b1c4-1d0628325a07 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.133913478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.140035244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.140836007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.165011883Z" level=info msg="Created container 5e29028b52626fd5025e90d56c9a5f6e81351ea8c7454d5698a5b003a9ca9742: default/mysql-5bb876957f-hhrsg/mysql" id=61255a3f-38ba-4c49-b1c4-1d0628325a07 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.165647126Z" level=info msg="Starting container: 5e29028b52626fd5025e90d56c9a5f6e81351ea8c7454d5698a5b003a9ca9742" id=889f3ba1-5fc1-4ab0-8ded-f0370fe67eb1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.167700683Z" level=info msg="Started container" PID=7700 containerID=5e29028b52626fd5025e90d56c9a5f6e81351ea8c7454d5698a5b003a9ca9742 description=default/mysql-5bb876957f-hhrsg/mysql id=889f3ba1-5fc1-4ab0-8ded-f0370fe67eb1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce1ca5a99c91087bdcd9b02c48883bcf6e571cf9a37e9d5917eda47498f3cf39
	Oct 19 12:14:34 functional-688409 crio[3605]: time="2025-10-19T12:14:34.584372081Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a5728ff3-cbf4-452e-97f8-c51342cede62 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:15:08 functional-688409 crio[3605]: time="2025-10-19T12:15:08.581142111Z" level=info msg="Stopping pod sandbox: 81bc591bbeaf9b42003b028820324a6858a8bb20c7e151b9ecf31c765fa501d3" id=b500e13b-8333-4069-9461-2c3390cc6e3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:15:08 functional-688409 crio[3605]: time="2025-10-19T12:15:08.581217982Z" level=info msg="Stopped pod sandbox (already stopped): 81bc591bbeaf9b42003b028820324a6858a8bb20c7e151b9ecf31c765fa501d3" id=b500e13b-8333-4069-9461-2c3390cc6e3c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 19 12:15:08 functional-688409 crio[3605]: time="2025-10-19T12:15:08.581618942Z" level=info msg="Removing pod sandbox: 81bc591bbeaf9b42003b028820324a6858a8bb20c7e151b9ecf31c765fa501d3" id=8783d1cb-15fb-4f4e-aa09-1069584ad7d7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:15:08 functional-688409 crio[3605]: time="2025-10-19T12:15:08.584687342Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:15:08 functional-688409 crio[3605]: time="2025-10-19T12:15:08.584764389Z" level=info msg="Removed pod sandbox: 81bc591bbeaf9b42003b028820324a6858a8bb20c7e151b9ecf31c765fa501d3" id=8783d1cb-15fb-4f4e-aa09-1069584ad7d7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 19 12:15:25 functional-688409 crio[3605]: time="2025-10-19T12:15:25.584316631Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9a2eeb5f-1046-4973-b69c-7c839f8c52dc name=/runtime.v1.ImageService/PullImage
	Oct 19 12:15:26 functional-688409 crio[3605]: time="2025-10-19T12:15:26.583883018Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f6b6ce13-b8cf-4b9e-8b5d-a425ee3c8322 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:16:48 functional-688409 crio[3605]: time="2025-10-19T12:16:48.584825747Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3cd834e8-8c4e-46da-ab6e-141b69677954 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:16:49 functional-688409 crio[3605]: time="2025-10-19T12:16:49.584665045Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f0b7489a-e28b-44fe-be23-e7839a984053 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:19:29 functional-688409 crio[3605]: time="2025-10-19T12:19:29.584566833Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bd15d70a-072d-45fa-84ca-4e8ac9b9d2dd name=/runtime.v1.ImageService/PullImage
	Oct 19 12:19:41 functional-688409 crio[3605]: time="2025-10-19T12:19:41.584465708Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cc20d57b-212d-4a1c-8ba4-7d693b8a8189 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5e29028b52626       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   ce1ca5a99c910       mysql-5bb876957f-hhrsg                       default
	25d1569750c5a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   1a30e1cf93ffd       kubernetes-dashboard-855c9754f9-8dmjw        kubernetes-dashboard
	e9f676192b1b1       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   0a74206fe37d1       dashboard-metrics-scraper-77bf4d6c4c-nqcsg   kubernetes-dashboard
	9e54bb24d49ce       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   3c629f9c5028e       sp-pod                                       default
	ba4bace5a36db       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   7b7ed8471c8d6       busybox-mount                                default
	5b7e7d5614dc1       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  9 minutes ago       Running             nginx                       0                   e307b4a9eccd5       nginx-svc                                    default
	715804d1de985       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   da4e3d081f431       storage-provisioner                          kube-system
	67433c3aa06cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   7cf2a3f86edf0       kube-apiserver-functional-688409             kube-system
	677c1bc0c4f5f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   c6943111fd439       kube-controller-manager-functional-688409    kube-system
	e4a1b98ca7eeb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   5badd1ce6920f       etcd-functional-688409                       kube-system
	45c67cbb1d3e6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   29a0b549017f2       kindnet-wksp7                                kube-system
	81a26d46f47f5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   65f323efba9b2       kube-proxy-7qd48                             kube-system
	7b6500c1776db       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   3e99a99ed230e       kube-scheduler-functional-688409             kube-system
	7580660749aef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   da4e3d081f431       storage-provisioner                          kube-system
	bc615c6b2464b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   84cf509b80bde       coredns-66bc5c9577-q8npq                     kube-system
	f3fedc2744fdc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   84cf509b80bde       coredns-66bc5c9577-q8npq                     kube-system
	110697ce74416       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   65f323efba9b2       kube-proxy-7qd48                             kube-system
	6d30b86f83d09       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   29a0b549017f2       kindnet-wksp7                                kube-system
	a987ca1a44ae9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   3e99a99ed230e       kube-scheduler-functional-688409             kube-system
	a856466ebcba2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   c6943111fd439       kube-controller-manager-functional-688409    kube-system
	edbc3af1a314e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   5badd1ce6920f       etcd-functional-688409                       kube-system
	
	
	==> coredns [bc615c6b2464b258346718819db78e49ce232d6a5001c987432e1b19d3548a37] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39485 - 21856 "HINFO IN 5728832163982070794.4779534497686695862. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.874763351s
	
	
	==> coredns [f3fedc2744fdc8c3c1a0ef57e7662cc3fc7466715e346ffabee83419e920c540] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45795 - 47568 "HINFO IN 4158188995430360237.4980365477662330341. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.475646938s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-688409
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-688409
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=functional-688409
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_12_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:12:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-688409
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:23:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:23:42 +0000   Sun, 19 Oct 2025 12:12:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:23:42 +0000   Sun, 19 Oct 2025 12:12:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:23:42 +0000   Sun, 19 Oct 2025 12:12:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:23:42 +0000   Sun, 19 Oct 2025 12:12:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-688409
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                d102c7cd-c111-44af-a35b-2222e344100a
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c79vz                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-whg26           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-hhrsg                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m32s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-q8npq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-688409                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-wksp7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-688409              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-688409     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7qd48                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-688409              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-nqcsg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8dmjw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-688409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-688409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-688409 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-688409 event: Registered Node functional-688409 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-688409 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-688409 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-688409 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-688409 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-688409 event: Registered Node functional-688409 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 31 d3 aa 8a bd 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	[Oct19 12:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.045444] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023837] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +2.047737] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +8.512033] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[Oct19 12:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[ +32.252549] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	
	
	==> etcd [e4a1b98ca7eeb2832b03c2ad19f220d260cc4483d18a5aa0f665296acce0d7d2] <==
	{"level":"warn","ts":"2025-10-19T12:13:28.473278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.479309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.486394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.497407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.504124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.514599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.521285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.527411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.534123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.539927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.545891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.563212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.568847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.574800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.580415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.595278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.601994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.608125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:13:28.653846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:14:35.186783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.592114ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:14:35.186883Z","caller":"traceutil/trace.go:172","msg":"trace[1127936080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:851; }","duration":"201.708803ms","start":"2025-10-19T12:14:34.985158Z","end":"2025-10-19T12:14:35.186867Z","steps":["trace[1127936080] 'range keys from in-memory index tree'  (duration: 201.52435ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:14:40.351897Z","caller":"traceutil/trace.go:172","msg":"trace[2107924217] transaction","detail":"{read_only:false; response_revision:854; number_of_response:1; }","duration":"134.817091ms","start":"2025-10-19T12:14:40.217055Z","end":"2025-10-19T12:14:40.351872Z","steps":["trace[2107924217] 'process raft request'  (duration: 63.422566ms)","trace[2107924217] 'compare'  (duration: 71.273483ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:23:28.199051Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2025-10-19T12:23:28.218302Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1126,"took":"18.832995ms","hash":679396988,"current-db-size-bytes":3395584,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-19T12:23:28.218350Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":679396988,"revision":1126,"compact-revision":-1}
	
	
	==> etcd [edbc3af1a314e7ad14c39e8f67882a342b81346afa0f7886eaebe075d8720b91] <==
	{"level":"warn","ts":"2025-10-19T12:12:10.476036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.481968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.487817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.500451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.506417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.512161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:12:10.561246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:13:05.649063Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T12:13:05.649151Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-688409","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-19T12:13:05.649245Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T12:13:05.649368Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T12:13:05.650848Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:13:05.650912Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-19T12:13:05.650927Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T12:13:05.650979Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-19T12:13:05.650982Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T12:13:05.650977Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T12:13:05.650939Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T12:13:05.651016Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T12:13:05.651027Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-19T12:13:05.650994Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:13:05.652820Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-19T12:13:05.652873Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T12:13:05.652900Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-19T12:13:05.652916Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-688409","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 12:23:59 up  2:06,  0 user,  load average: 0.10, 0.30, 0.89
	Linux functional-688409 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [45c67cbb1d3e614b9b741b97f33e7b6a5b40b3c6542b263e4599101d8e19dde3] <==
	I1019 12:21:55.842444       1 main.go:301] handling current node
	I1019 12:22:05.838088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:05.838144       1 main.go:301] handling current node
	I1019 12:22:15.842172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:15.842208       1 main.go:301] handling current node
	I1019 12:22:25.846250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:25.846285       1 main.go:301] handling current node
	I1019 12:22:35.837802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:35.837841       1 main.go:301] handling current node
	I1019 12:22:45.839296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:45.839339       1 main.go:301] handling current node
	I1019 12:22:55.841367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:22:55.841398       1 main.go:301] handling current node
	I1019 12:23:05.837459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:05.837494       1 main.go:301] handling current node
	I1019 12:23:15.841676       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:15.841713       1 main.go:301] handling current node
	I1019 12:23:25.846047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:25.846094       1 main.go:301] handling current node
	I1019 12:23:35.837755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:35.837795       1 main.go:301] handling current node
	I1019 12:23:45.838352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:45.838387       1 main.go:301] handling current node
	I1019 12:23:55.840583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:23:55.840620       1 main.go:301] handling current node
	
	
	==> kindnet [6d30b86f83d09def100795d4281c503cf3bce887099e267c82ef8643ff0a05b8] <==
	I1019 12:12:19.567900       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:12:19.568174       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1019 12:12:19.568314       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:12:19.568328       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:12:19.568347       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:12:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:12:19.770650       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:12:19.770677       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:12:19.770690       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:12:19.770858       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:12:20.089668       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:12:20.089704       1 metrics.go:72] Registering metrics
	I1019 12:12:20.089753       1 controller.go:711] "Syncing nftables rules"
	I1019 12:12:29.773112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:12:29.773159       1 main.go:301] handling current node
	I1019 12:12:39.771119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:12:39.771168       1 main.go:301] handling current node
	I1019 12:12:49.770174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1019 12:12:49.770207       1 main.go:301] handling current node
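	
	The "nri plugin exited" line above reflects kindnet probing for an NRI socket that this node does not expose; the probe is harmless, as the later "handling current node" sync lines confirm. A quick check of the socket's absence, run inside "minikube ssh" (the path comes straight from the error message above):
	
	  ls -l /var/run/nri/nri.sock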
	
	
	==> kube-apiserver [67433c3aa06ccd0487795221334afb3517e6f843238692c8c5e8991cbe9fdf2d] <==
	I1019 12:13:29.124474       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:13:30.007000       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1019 12:13:30.213184       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1019 12:13:30.214339       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:13:30.218191       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:13:30.612280       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:13:30.700521       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:13:30.709047       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:13:30.753962       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:13:30.758810       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:13:32.785124       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:13:53.122072       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.91.133"}
	I1019 12:13:57.760401       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.218.253"}
	I1019 12:13:57.863672       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.38.104"}
	I1019 12:13:58.654494       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.5.135"}
	E1019 12:14:12.882599       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:39214: use of closed network connection
	I1019 12:14:19.899276       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:14:20.000314       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.120.60"}
	I1019 12:14:20.013513       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.89.155"}
	E1019 12:14:21.034226       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51136: use of closed network connection
	I1019 12:14:27.950604       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.106.98.67"}
	E1019 12:14:41.070926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32906: use of closed network connection
	E1019 12:14:41.965648       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32928: use of closed network connection
	E1019 12:14:43.245313       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32934: use of closed network connection
	I1019 12:23:29.029372       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
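	
	Each "allocated clusterIPs" line above corresponds to a Service created by the tests. A hedged cross-check of those allocations from the client side (the context name matches the profile used throughout this report):
	
	  # list every Service with its allocated ClusterIP
	  kubectl --context functional-688409 get svc -A \
	    -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP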
	
	
	==> kube-controller-manager [677c1bc0c4f5f6a29f2c5f23feb4a8232f7e48459e7b997c29babd3e0ea91793] <==
	I1019 12:13:32.429446       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:13:32.429583       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:13:32.429613       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:13:32.429959       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:13:32.429987       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:13:32.430006       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:13:32.430463       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:13:32.430488       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:13:32.430577       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:13:32.430664       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:13:32.430695       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-688409"
	I1019 12:13:32.430732       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:13:32.433931       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:13:32.433944       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:13:32.433950       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:13:32.435280       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:13:32.436120       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:13:32.438465       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:13:32.445770       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 12:14:19.945789       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 12:14:19.950014       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 12:14:19.952560       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 12:14:19.954046       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 12:14:19.955532       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1019 12:14:19.960547       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
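	
	The "Unhandled Error" bursts above appear to be a creation-order race: the dashboard ReplicaSets were reconciled before their ServiceAccount existed, and the errors stop once it does. A quick way to confirm the account eventually landed (namespace and name taken from the errors above):
	
	  kubectl --context functional-688409 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard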
	
	
	==> kube-controller-manager [a856466ebcba2bdf37f7e4562f0ee29240d4fe1df3a7ff43b4ec8ce95225b6bd] <==
	I1019 12:12:17.932189       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:12:17.932340       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:12:17.932342       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-688409"
	I1019 12:12:17.932436       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 12:12:17.933097       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 12:12:17.933126       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:12:17.933226       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:12:17.933247       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:12:17.933373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:12:17.933494       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:12:17.933502       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 12:12:17.933675       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:12:17.934415       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:12:17.934500       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:12:17.935812       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:12:17.936787       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:12:17.936853       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:12:17.936899       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:12:17.936909       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:12:17.936916       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:12:17.939053       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:12:17.942322       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:12:17.943123       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-688409" podCIDRs=["10.244.0.0/24"]
	I1019 12:12:17.949137       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:12:32.934756       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [110697ce7441607f418c78e0ea72329f4b7c38aee008e1c16514adb419e017a7] <==
	I1019 12:12:19.410663       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:12:19.473178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:12:19.573267       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:12:19.573315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:12:19.573407       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:12:19.592379       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:12:19.592449       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:12:19.597382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:12:19.597783       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:12:19.597803       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:12:19.599123       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:12:19.599167       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:12:19.599170       1 config.go:200] "Starting service config controller"
	I1019 12:12:19.599185       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:12:19.599208       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:12:19.599228       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:12:19.599244       1 config.go:309] "Starting node config controller"
	I1019 12:12:19.599256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:12:19.599262       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:12:19.699303       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:12:19.699323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:12:19.699354       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
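	
	The "Kube-proxy configuration may be incomplete or incorrect" warning above carries its own remedy: setting nodePortAddresses so NodePort traffic is accepted only on the node's primary addresses. In a kubeadm-style cluster such as this one, that setting lives in the kube-proxy ConfigMap (a sketch; the default kubeadm ConfigMap name is assumed):
	
	  # edit the config and set:  nodePortAddresses: ["primary"]
	  kubectl --context functional-688409 -n kube-system edit configmap kube-proxy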
	
	
	==> kube-proxy [81a26d46f47f59b4175f305377a5418a1f4507429b3cc6c605477efade8b12c8] <==
	E1019 12:12:55.623280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-688409&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:12:57.113189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-688409&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:12:59.507695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-688409&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:13:15.629260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-688409&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:13:26.109242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-688409&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1019 12:13:46.823293       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:13:46.823335       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1019 12:13:46.823473       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:13:46.842126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:13:46.842173       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:13:46.847730       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:13:46.848048       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:13:46.848073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:13:46.849324       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:13:46.849349       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:13:46.849383       1 config.go:200] "Starting service config controller"
	I1019 12:13:46.849390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:13:46.849404       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:13:46.849410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:13:46.849886       1 config.go:309] "Starting node config controller"
	I1019 12:13:46.849918       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:13:46.849926       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:13:46.949563       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:13:46.949581       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:13:46.949579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7b6500c1776dba0e4b31f21984ee141595286b13e070135a6a0c49f36ff186f3] <==
	E1019 12:13:15.636593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:13:15.886240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:13:16.130942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:13:16.432735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:13:16.467926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:13:20.109519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:13:22.045983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:13:22.110506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:13:23.211616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:13:23.726012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:13:23.904817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:13:24.030348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:13:24.159514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:13:24.388535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:13:24.460786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:13:25.965180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:13:25.979706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:13:27.042110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:59854->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:13:27.042296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:59982->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:13:27.042298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:49320->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:13:27.042298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:60002->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:13:27.042406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:59840->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:13:27.042487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:59974->192.168.49.2:8441: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:13:29.023954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1019 12:13:38.871750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
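	
	The watch failures above span the window in which the apiserver was restarting (compare the kube-apiserver log earlier); once "Caches are synced" appears, the scheduler has recovered. For reference, apiserver readiness during such a window can be probed directly via the standard health endpoint (not specific to this test):
	
	  kubectl --context functional-688409 get --raw='/readyz?verbose'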
	
	
	==> kube-scheduler [a987ca1a44ae959b5198271f3e2be2ff7c98a1bdca24b43ed1db941bf45397f8] <==
	E1019 12:12:10.947093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:12:10.947335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:12:10.947414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:12:10.947451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:12:10.947482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:12:10.947505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:12:10.947566       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:12:10.947600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:12:10.947648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:12:11.765710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:12:11.766697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:12:11.794177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:12:11.899908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:12:11.980620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:12:12.011000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:12:12.018133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:12:12.031092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:12:12.048568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1019 12:12:13.744736       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:12:54.919996       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1019 12:12:54.920065       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:12:54.920066       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1019 12:12:54.920090       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1019 12:12:54.920235       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1019 12:12:54.920265       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 19 12:21:22 functional-688409 kubelet[4227]: E1019 12:21:22.584072    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:21:23 functional-688409 kubelet[4227]: E1019 12:21:23.583349    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:21:37 functional-688409 kubelet[4227]: E1019 12:21:37.583812    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:21:38 functional-688409 kubelet[4227]: E1019 12:21:38.583728    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:21:48 functional-688409 kubelet[4227]: E1019 12:21:48.584603    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:21:52 functional-688409 kubelet[4227]: E1019 12:21:52.583554    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:22:03 functional-688409 kubelet[4227]: E1019 12:22:03.583843    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:22:07 functional-688409 kubelet[4227]: E1019 12:22:07.584158    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:22:18 functional-688409 kubelet[4227]: E1019 12:22:18.584822    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:22:18 functional-688409 kubelet[4227]: E1019 12:22:18.584917    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:22:29 functional-688409 kubelet[4227]: E1019 12:22:29.583374    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:22:32 functional-688409 kubelet[4227]: E1019 12:22:32.583634    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:22:41 functional-688409 kubelet[4227]: E1019 12:22:41.584141    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:22:45 functional-688409 kubelet[4227]: E1019 12:22:45.583921    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:22:54 functional-688409 kubelet[4227]: E1019 12:22:54.584001    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:22:58 functional-688409 kubelet[4227]: E1019 12:22:58.584436    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:23:05 functional-688409 kubelet[4227]: E1019 12:23:05.583989    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:23:11 functional-688409 kubelet[4227]: E1019 12:23:11.584285    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:23:16 functional-688409 kubelet[4227]: E1019 12:23:16.583670    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:23:26 functional-688409 kubelet[4227]: E1019 12:23:26.583890    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:23:27 functional-688409 kubelet[4227]: E1019 12:23:27.584014    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:23:39 functional-688409 kubelet[4227]: E1019 12:23:39.584071    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
	Oct 19 12:23:41 functional-688409 kubelet[4227]: E1019 12:23:41.583672    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:23:53 functional-688409 kubelet[4227]: E1019 12:23:53.583362    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c79vz" podUID="bed09578-c586-490d-92c7-272ca734a112"
	Oct 19 12:23:53 functional-688409 kubelet[4227]: E1019 12:23:53.583480    4227 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-whg26" podUID="593134e8-59ef-4529-ad43-9931d16be761"
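	
	Every kubelet error above has the same root cause: "kicbase/echo-server" is a short image name, and CRI-O's short-name-mode is set to enforcing, so the ambiguous reference is rejected instead of being resolved against unqualified-search-registries. Two hedged workarounds (the config path and key are containers/image defaults, assumed rather than shown in this log):
	
	  # 1) pull by a fully-qualified reference so no short-name resolution is involved
	  crictl pull docker.io/kicbase/echo-server:latest
	
	  # 2) or relax the node policy inside "minikube ssh":
	  #    set  short-name-mode = "permissive"  in /etc/containers/registries.conf, then
	  sudo systemctl restart crio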
	
	
	==> kubernetes-dashboard [25d1569750c5ab067d875d8521dff97d6cb2683881e3c0d12ffcc5fa438c9a64] <==
	2025/10/19 12:14:24 Using namespace: kubernetes-dashboard
	2025/10/19 12:14:24 Using in-cluster config to connect to apiserver
	2025/10/19 12:14:24 Using secret token for csrf signing
	2025/10/19 12:14:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:14:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:14:24 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:14:24 Generating JWE encryption key
	2025/10/19 12:14:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:14:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:14:24 Initializing JWE encryption key from synchronized object
	2025/10/19 12:14:24 Creating in-cluster Sidecar client
	2025/10/19 12:14:24 Successful request to sidecar
	2025/10/19 12:14:24 Serving insecurely on HTTP port: 9090
	2025/10/19 12:14:24 Starting overwatch
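	
	The dashboard reports it is serving plain HTTP on container port 9090, so it can be reached without its Service by port-forwarding straight to the Deployment (a sketch; the Deployment name is assumed from the ReplicaSet names in the controller-manager log above):
	
	  kubectl --context functional-688409 -n kubernetes-dashboard \
	    port-forward deploy/kubernetes-dashboard 9090:9090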
	
	
	==> storage-provisioner [715804d1de985f0a181e5c10cc6b701f0f062c2c9ef820e2cdf3176ec356c52c] <==
	W1019 12:23:34.635919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:36.639036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:36.643029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:38.646083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:38.649978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:40.652907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:40.657343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:42.660481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:42.664054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:44.666514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:44.670965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:46.674127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:46.677882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:48.680676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:48.684886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:50.689986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:50.694341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:52.697211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:52.701767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:54.704628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:54.708303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:56.711003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:56.714706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:58.717332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:23:58.721350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7580660749aef12c453d17121a7a1348590c7370d6e434be87b5342ac1cd6a49] <==
	I1019 12:12:55.510657       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:12:55.512442       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-688409 -n functional-688409
helpers_test.go:269: (dbg) Run:  kubectl --context functional-688409 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-c79vz hello-node-connect-7d85dfc575-whg26
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-688409 describe pod busybox-mount hello-node-75c85bcc94-c79vz hello-node-connect-7d85dfc575-whg26
helpers_test.go:290: (dbg) kubectl --context functional-688409 describe pod busybox-mount hello-node-75c85bcc94-c79vz hello-node-connect-7d85dfc575-whg26:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-688409/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 12:14:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://ba4bace5a36db79c35df7446c6767e1aaaa2203f413baecf78af831ad54b8ad7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 19 Oct 2025 12:14:10 +0000
	      Finished:     Sun, 19 Oct 2025 12:14:10 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hr28d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hr28d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-688409
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 658ms (658ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-c79vz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-688409/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 12:13:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xf95 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8xf95:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c79vz to functional-688409
	  Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-whg26
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-688409/192.168.49.2
	Start Time:       Sun, 19 Oct 2025 12:13:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lg8jp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lg8jp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-whg26 to functional-688409
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.89s)
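
Note: every hello-node pull failure above reports the same kubelet error: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list". A plausible root cause (an inference, not confirmed by this report) is that CRI-O resolves unqualified image names through containers-registries.conf, and under short-name-mode = "enforcing" an ambiguous short name is rejected instead of defaulting to docker.io. A minimal diagnostic sketch, assuming the policy is set explicitly in /etc/containers/registries.conf on the node (it may also be a build-time default):

	# inspect the short-name policy inside the minikube node (diagnostic only)
	out/minikube-linux-amd64 -p functional-688409 ssh -- grep -n short-name-mode /etc/containers/registries.conf
	# hypothetical workaround: relax the policy and restart CRI-O
	out/minikube-linux-amd64 -p functional-688409 ssh -- sudo sed -i 's/^short-name-mode = .*/short-name-mode = "permissive"/' /etc/containers/registries.conf
	out/minikube-linux-amd64 -p functional-688409 ssh -- sudo systemctl restart crio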

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-688409 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-688409 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-c79vz" [bed09578-c586-490d-92c7-272ca734a112] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-688409 -n functional-688409
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-19 12:23:58.988320384 +0000 UTC m=+1126.678764234
functional_test.go:1460: (dbg) Run:  kubectl --context functional-688409 describe po hello-node-75c85bcc94-c79vz -n default
functional_test.go:1460: (dbg) kubectl --context functional-688409 describe po hello-node-75c85bcc94-c79vz -n default:
Name:             hello-node-75c85bcc94-c79vz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-688409/192.168.49.2
Start Time:       Sun, 19 Oct 2025 12:13:58 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8xf95 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-8xf95:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c79vz to functional-688409
  Normal   Pulling    7m11s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m11s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m11s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-688409 logs hello-node-75c85bcc94-c79vz -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-688409 logs hello-node-75c85bcc94-c79vz -n default: exit status 1 (68.160286ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-c79vz" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-688409 logs hello-node-75c85bcc94-c79vz -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.60s)
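
Note: this timeout shares the root cause above; the deployment created at functional_test.go:1451 references the bare short name kicbase/echo-server. A hedged repro-side alternative that sidesteps short-name resolution entirely is to fully qualify the image (docker.io being an assumption about the registry the test intends):

	kubectl --context functional-688409 create deployment hello-node --image docker.io/kicbase/echo-server:latest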

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image load --daemon kicbase/echo-server:functional-688409 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-688409" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)
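
Note: `image load --daemon` exited zero here, yet the follow-up `image ls` shows no kicbase/echo-server:functional-688409. One way to check what actually reached the CRI-O image store, independent of minikube's own listing (a diagnostic sketch, not part of the test):

	out/minikube-linux-amd64 -p functional-688409 ssh -- sudo crictl images | grep echo-server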

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image load --daemon kicbase/echo-server:functional-688409 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 image load --daemon kicbase/echo-server:functional-688409 --alsologtostderr: (1.5847237s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-688409" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
E1019 12:14:24.834249  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-688409
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image load --daemon kicbase/echo-server:functional-688409 --alsologtostderr
2025/10/19 12:14:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-688409" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image save kicbase/echo-server:functional-688409 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
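
Note: the tarball was never written, and the next failure (ImageLoadFromFile, which reads the same path) is a direct cascade of this one. A quick standalone check of the save path, with /tmp substituted as an illustrative destination:

	out/minikube-linux-amd64 -p functional-688409 image save kicbase/echo-server:functional-688409 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head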

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1019 12:14:26.905513  394150 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:14:26.905785  394150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:26.905796  394150 out.go:374] Setting ErrFile to fd 2...
	I1019 12:14:26.905800  394150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:26.905989  394150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:14:26.906506  394150 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:26.906598  394150 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:26.906958  394150 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
	I1019 12:14:26.926855  394150 ssh_runner.go:195] Run: systemctl --version
	I1019 12:14:26.926906  394150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
	I1019 12:14:26.946408  394150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
	I1019 12:14:27.044949  394150 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1019 12:14:27.045013  394150 cache_images.go:254] Failed to load cached images for "functional-688409": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1019 12:14:27.045030  394150 cache_images.go:266] failed pushing to: functional-688409

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-688409
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image save --daemon kicbase/echo-server:functional-688409 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-688409
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-688409: exit status 1 (19.120396ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-688409

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-688409

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.34s)
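
Note: consistent with the earlier load failures, `image save --daemon` likely had nothing cluster-side to export, so Docker never received localhost/kicbase/echo-server:functional-688409. Listing what the Docker daemon actually holds for the repository (diagnostic only):

	docker images | grep echo-server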

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 service --namespace=default --https --url hello-node: exit status 115 (524.755634ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32477
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-688409 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
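
Note: exit status 115 (SVC_UNREACHABLE) in this and the following two ServiceCmd failures is a downstream symptom: the NodePort URL is minted, but the service has no ready backend because the echo-server pod never left ImagePullBackOff. Confirming that the service selects zero ready endpoints (diagnostic sketch):

	kubectl --context functional-688409 get endpointslices -l kubernetes.io/service-name=hello-node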

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 service hello-node --url --format={{.IP}}: exit status 115 (524.911947ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-688409 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 service hello-node --url: exit status 115 (531.604581ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32477
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-688409 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32477
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
TestJSONOutput/pause/Command (2.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-071159 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-071159 --output=json --user=testUser: exit status 80 (2.324670252s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7b8029e6-6f74-4fba-aae6-1048c1d35489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-071159 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"357628be-e54d-4479-9e63-fac5200aa7e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T12:32:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"6cb888af-5b08-480e-ab6f-714ac9887a5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-071159 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.32s)
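
Note: this failure family (JSONOutput pause/unpause and TestPause below) is unrelated to the image pulls: `sudo runc list -f json` fails because /run/runc does not exist, so minikube cannot enumerate containers to pause. Whether CRI-O on this kicbase image keeps runc state under /run/runc (the runc CLI's default root for the root user) is an assumption worth checking directly:

	out/minikube-linux-amd64 -p json-output-071159 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-amd64 -p json-output-071159 ssh -- sudo crictl ps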

                                                
                                    
TestJSONOutput/unpause/Command (1.88s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-071159 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-071159 --output=json --user=testUser: exit status 80 (1.879035169s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2f52d271-24ce-4d1b-8c73-64a8cf23aeab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-071159 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"37554424-8cf0-4bae-b018-936ab12a92eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-19T12:32:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"ba419656-cb4c-415b-89bf-cd8cc20bebb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-071159 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.88s)

                                                
                                    
TestPause/serial/Pause (5.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-513789 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-513789 --alsologtostderr -v=5: exit status 80 (2.448429422s)

                                                
                                                
-- stdout --
	* Pausing node pause-513789 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:47:15.102949  569366 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:47:15.103207  569366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:15.103218  569366 out.go:374] Setting ErrFile to fd 2...
	I1019 12:47:15.103222  569366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:15.103435  569366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:47:15.103685  569366 out.go:368] Setting JSON to false
	I1019 12:47:15.103730  569366 mustload.go:65] Loading cluster: pause-513789
	I1019 12:47:15.104074  569366 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:15.104466  569366 cli_runner.go:164] Run: docker container inspect pause-513789 --format={{.State.Status}}
	I1019 12:47:15.124417  569366 host.go:66] Checking if "pause-513789" exists ...
	I1019 12:47:15.124699  569366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:47:15.182595  569366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-19 12:47:15.172206145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:47:15.183526  569366 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-513789 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 12:47:15.185336  569366 out.go:179] * Pausing node pause-513789 ... 
	I1019 12:47:15.186916  569366 host.go:66] Checking if "pause-513789" exists ...
	I1019 12:47:15.187268  569366 ssh_runner.go:195] Run: systemctl --version
	I1019 12:47:15.187316  569366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:15.206644  569366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:15.301449  569366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:15.314866  569366 pause.go:52] kubelet running: true
	I1019 12:47:15.314963  569366 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:47:15.466151  569366 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:47:15.466236  569366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:47:15.531348  569366 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:15.531378  569366 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:15.531384  569366 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:15.531388  569366 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:15.531392  569366 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:15.531396  569366 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:15.531400  569366 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:15.531404  569366 cri.go:89] found id: ""
	I1019 12:47:15.531467  569366 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:47:15.543309  569366 retry.go:31] will retry after 219.639099ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:15Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:47:15.763662  569366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:15.777499  569366 pause.go:52] kubelet running: false
	I1019 12:47:15.777559  569366 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:47:15.887329  569366 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:47:15.887409  569366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:47:15.956498  569366 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:15.956526  569366 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:15.956532  569366 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:15.956537  569366 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:15.956554  569366 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:15.956559  569366 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:15.956597  569366 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:15.956606  569366 cri.go:89] found id: ""
	I1019 12:47:15.956649  569366 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:47:15.968486  569366 retry.go:31] will retry after 432.565693ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:15Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:47:16.402189  569366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:16.416083  569366 pause.go:52] kubelet running: false
	I1019 12:47:16.416153  569366 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:47:16.529253  569366 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:47:16.529339  569366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:47:16.598165  569366 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:16.598186  569366 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:16.598190  569366 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:16.598193  569366 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:16.598195  569366 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:16.598198  569366 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:16.598201  569366 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:16.598203  569366 cri.go:89] found id: ""
	I1019 12:47:16.598240  569366 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:47:16.610279  569366 retry.go:31] will retry after 654.419385ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:16Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:47:17.264879  569366 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:17.278883  569366 pause.go:52] kubelet running: false
	I1019 12:47:17.278940  569366 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:47:17.404362  569366 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:47:17.404472  569366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:47:17.477689  569366 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:17.477723  569366 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:17.477729  569366 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:17.477733  569366 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:17.477737  569366 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:17.477741  569366 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:17.477746  569366 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:17.477750  569366 cri.go:89] found id: ""
	I1019 12:47:17.477804  569366 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:47:17.494263  569366 out.go:203] 
	W1019 12:47:17.495517  569366 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:47:17.495534  569366 out.go:285] * 
	* 
	W1019 12:47:17.500100  569366 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:47:17.502100  569366 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-513789 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-513789
helpers_test.go:243: (dbg) docker inspect pause-513789:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc",
	        "Created": "2025-10-19T12:46:31.27526377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:46:31.314726475Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/hosts",
	        "LogPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc-json.log",
	        "Name": "/pause-513789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-513789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-513789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc",
	                "LowerDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-513789",
	                "Source": "/var/lib/docker/volumes/pause-513789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-513789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-513789",
	                "name.minikube.sigs.k8s.io": "pause-513789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0510ac92a2fc2782a6f5954270692bdf4a2b9e635c12be17558ecd4f3306ab22",
	            "SandboxKey": "/var/run/docker/netns/0510ac92a2fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-513789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:5e:90:d4:37:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bcb1162b0f864bd39e0ae8f3ebf42dd06eacb92bce754ec3ed5c0330e43511e",
	                    "EndpointID": "76820c09cacde9767d087d622dbfc1e8176aa1e2bada325b06f72216381c38b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-513789",
	                        "7b2509a4aec9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
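The fields of the inspect dump above that matter for this failure can also be pulled out directly with Go-template format strings instead of reading the full JSON (illustrative commands, assuming the container is still running):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-513789   # "running paused=false", matching the State block above
	docker inspect -f '{{json .HostConfig.Tmpfs}}' pause-513789                   # {"/run":"","/tmp":""}, matching the Tmpfs block above

Note that `/run` is a tmpfs inside the node, so `/run/runc` exists only once runc has written state under it; that is consistent with the `open /run/runc: no such file or directory` error from the failed pause.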
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-513789 -n pause-513789
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-513789 -n pause-513789: exit status 2 (326.761035ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-513789 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-931932 sudo systemctl cat cri-docker --no-pager                                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                          │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cri-dockerd --version                                                                   │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl status containerd --all --full --no-pager                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl cat containerd --no-pager                                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /lib/systemd/system/containerd.service                                              │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /etc/containerd/config.toml                                                         │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo containerd config dump                                                                  │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl status crio --all --full --no-pager                                           │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl cat crio --no-pager                                                           │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                 │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo crio config                                                                             │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ delete  │ -p cilium-931932                                                                                              │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ start   │ -p running-upgrade-188277 --memory=3072 --vm-driver=docker  --container-runtime=crio                          │ running-upgrade-188277 │ jenkins │ v1.32.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ ssh     │ cert-options-868990 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                   │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ ssh     │ -p cert-options-868990 -- sudo cat /etc/kubernetes/admin.conf                                                 │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ delete  │ -p cert-options-868990                                                                                        │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ start   │ -p pause-513789 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio     │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p running-upgrade-188277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ running-upgrade-188277 │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:47 UTC │
	│ delete  │ -p running-upgrade-188277                                                                                     │ running-upgrade-188277 │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p pause-513789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                              │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p NoKubernetes-352361 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio │ NoKubernetes-352361    │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	│ start   │ -p NoKubernetes-352361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio         │ NoKubernetes-352361    │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	│ pause   │ -p pause-513789 --alsologtostderr -v=5                                                                        │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:47:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:47:07.484915  567019 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:47:07.485226  567019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:07.485237  567019 out.go:374] Setting ErrFile to fd 2...
	I1019 12:47:07.485242  567019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:07.485414  567019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:47:07.485850  567019 out.go:368] Setting JSON to false
	I1019 12:47:07.487028  567019 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8975,"bootTime":1760869052,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:47:07.487124  567019 start.go:141] virtualization: kvm guest
	I1019 12:47:07.489668  567019 out.go:179] * [NoKubernetes-352361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:47:07.491101  567019 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:47:07.491138  567019 notify.go:220] Checking for updates...
	I1019 12:47:07.493630  567019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:47:07.495523  567019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:47:07.496593  567019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:47:07.497693  567019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:47:07.498821  567019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:47:07.500619  567019 config.go:182] Loaded profile config "cert-expiration-599351": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.500759  567019 config.go:182] Loaded profile config "kubernetes-upgrade-566686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.500931  567019 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.501050  567019 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:47:07.525532  567019 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:47:07.525675  567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:47:07.584951  567019 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:47:07.57298267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:47:07.585062  567019 docker.go:318] overlay module found
	I1019 12:47:07.586558  567019 out.go:179] * Using the docker driver based on user configuration
	I1019 12:47:07.587609  567019 start.go:305] selected driver: docker
	I1019 12:47:07.587625  567019 start.go:925] validating driver "docker" against <nil>
	I1019 12:47:07.587637  567019 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:47:07.588202  567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:47:07.647506  567019 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:47:07.636405307 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:47:07.647708  567019 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:47:07.647914  567019 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:47:07.649258  567019 out.go:179] * Using Docker driver with root privileges
	I1019 12:47:07.650230  567019 cni.go:84] Creating CNI manager for ""
	I1019 12:47:07.650307  567019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:47:07.650325  567019 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:47:07.650406  567019 start.go:349] cluster config:
	{Name:NoKubernetes-352361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-352361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:47:07.651780  567019 out.go:179] * Starting "NoKubernetes-352361" primary control-plane node in "NoKubernetes-352361" cluster
	I1019 12:47:07.653083  567019 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:47:07.654549  567019 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:47:07.655686  567019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:07.655807  567019 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:47:07.655836  567019 cache.go:58] Caching tarball of preloaded images
	I1019 12:47:07.655729  567019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:47:07.655968  567019 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:47:07.655983  567019 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:47:07.656132  567019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json ...
	I1019 12:47:07.656158  567019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json: {Name:mk0b5a2ed7872728a1688c82c2fcbe2b071deb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:07.677788  567019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:47:07.677816  567019 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:47:07.677836  567019 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:47:07.677867  567019 start.go:360] acquireMachinesLock for NoKubernetes-352361: {Name:mkcfbda9f21f0534f21846ed1fab72e95ee68b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:47:07.677973  567019 start.go:364] duration metric: took 86.06µs to acquireMachinesLock for "NoKubernetes-352361"
	I1019 12:47:07.678003  567019 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-352361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-352361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:47:07.678071  567019 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:47:05.924501  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:05.925021  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:05.925086  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:05.925134  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:05.951922  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:05.951940  534438 cri.go:89] found id: ""
	I1019 12:47:05.951961  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:05.952010  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:05.956095  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:05.956172  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:05.983137  534438 cri.go:89] found id: ""
	I1019 12:47:05.983168  534438 logs.go:282] 0 containers: []
	W1019 12:47:05.983179  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:05.983188  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:05.983252  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:06.010380  534438 cri.go:89] found id: ""
	I1019 12:47:06.010408  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.010431  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:06.010441  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:06.010507  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:06.037530  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:06.037555  534438 cri.go:89] found id: ""
	I1019 12:47:06.037566  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:06.037639  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:06.041918  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:06.041992  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:06.070033  534438 cri.go:89] found id: ""
	I1019 12:47:06.070069  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.070080  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:06.070088  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:06.070137  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:06.096479  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:06.096501  534438 cri.go:89] found id: ""
	I1019 12:47:06.096509  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:06.096556  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:06.100487  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:06.100569  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:06.127501  534438 cri.go:89] found id: ""
	I1019 12:47:06.127526  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.127534  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:06.127542  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:06.127600  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:06.155139  534438 cri.go:89] found id: ""
	I1019 12:47:06.155175  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.155186  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:06.155198  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:06.155214  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 12:47:06.199723  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:06.199759  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:06.230366  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:06.230403  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:06.306133  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:06.306170  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:06.323555  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:06.323586  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:06.383512  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:06.383550  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:06.383568  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:06.419804  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:06.419838  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:06.471511  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:06.471544  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:09.002502  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:09.003005  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:09.003075  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:09.003134  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:09.033936  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:09.033962  534438 cri.go:89] found id: ""
	I1019 12:47:09.033974  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:09.034038  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.038268  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:09.038346  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:09.069656  534438 cri.go:89] found id: ""
	I1019 12:47:09.069687  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.069698  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:09.069707  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:09.069768  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:09.100969  534438 cri.go:89] found id: ""
	I1019 12:47:09.100994  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.101002  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:09.101008  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:09.101065  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:09.129927  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:09.129955  534438 cri.go:89] found id: ""
	I1019 12:47:09.129966  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:09.130030  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.134502  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:09.134570  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:09.165215  534438 cri.go:89] found id: ""
	I1019 12:47:09.165245  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.165257  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:09.165265  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:09.165339  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:09.197250  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:09.197270  534438 cri.go:89] found id: ""
	I1019 12:47:09.197278  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:09.197326  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.201879  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:09.201949  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:09.228702  534438 cri.go:89] found id: ""
	I1019 12:47:09.228731  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.228739  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:09.228745  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:09.228806  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:09.258347  534438 cri.go:89] found id: ""
	I1019 12:47:09.258379  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.258390  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:09.258403  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:09.258417  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:09.290203  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:09.290241  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:09.369505  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:09.369539  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:09.387462  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:09.387499  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:09.446836  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:09.446870  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:09.446896  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:09.482628  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:09.482663  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:09.532862  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:09.532899  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:09.560404  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:09.560467  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 12:47:07.414386  566789 out.go:252] * Updating the running docker "pause-513789" container ...
	I1019 12:47:07.414446  566789 machine.go:93] provisionDockerMachine start ...
	I1019 12:47:07.414523  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.434363  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.434712  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.434737  566789 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:47:07.574984  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-513789
	
	I1019 12:47:07.575015  566789 ubuntu.go:182] provisioning hostname "pause-513789"
	I1019 12:47:07.575090  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.595810  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.596100  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.596120  566789 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-513789 && echo "pause-513789" | sudo tee /etc/hostname
	I1019 12:47:07.747222  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-513789
	
	I1019 12:47:07.747310  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.769233  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.769578  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.769610  566789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-513789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-513789/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-513789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:47:07.908184  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:47:07.908218  566789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:47:07.908267  566789 ubuntu.go:190] setting up certificates
	I1019 12:47:07.908282  566789 provision.go:84] configureAuth start
	I1019 12:47:07.908344  566789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-513789
	I1019 12:47:07.927117  566789 provision.go:143] copyHostCerts
	I1019 12:47:07.927194  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:47:07.927218  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:07.927297  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:47:07.927447  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:47:07.927463  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:07.927512  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:47:07.927632  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:47:07.927646  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:07.927689  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:47:07.927804  566789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.pause-513789 san=[127.0.0.1 192.168.85.2 localhost minikube pause-513789]
	I1019 12:47:08.376872  566789 provision.go:177] copyRemoteCerts
	I1019 12:47:08.376941  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:47:08.377008  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:08.402285  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:08.502000  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:47:08.520851  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:47:08.539444  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:47:08.557879  566789 provision.go:87] duration metric: took 649.574789ms to configureAuth
	I1019 12:47:08.557916  566789 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:47:08.558109  566789 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:08.558207  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:08.576409  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:08.576680  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:08.576701  566789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:47:10.261815  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:47:10.261854  566789 machine.go:96] duration metric: took 2.847398039s to provisionDockerMachine
	I1019 12:47:10.261870  566789 start.go:293] postStartSetup for "pause-513789" (driver="docker")
	I1019 12:47:10.261884  566789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:47:10.261953  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:47:10.261991  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.282120  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.380332  566789 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:47:10.383990  566789 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:47:10.384013  566789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:47:10.384023  566789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:47:10.384069  566789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:47:10.384136  566789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:47:10.384230  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:47:10.392056  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:10.410746  566789 start.go:296] duration metric: took 148.855428ms for postStartSetup
	I1019 12:47:10.410845  566789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:47:10.410925  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.429328  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.526008  566789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
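The two probes above are simple disk-health checks against /var, where container images and etcd data live. What each one-liner computes, runnable on any Linux host:

    df -h /var | awk 'NR==2{print $5}'    # column 5 of the data row: percent used, e.g. "12%"
    df -BG /var | awk 'NR==2{print $4}'   # with -BG, column 4 is available space in GiB, e.g. "250G"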
	I1019 12:47:10.530892  566789 fix.go:56] duration metric: took 3.1381217s for fixHost
	I1019 12:47:10.530920  566789 start.go:83] releasing machines lock for "pause-513789", held for 3.138165877s
	I1019 12:47:10.530995  566789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-513789
	I1019 12:47:10.549040  566789 ssh_runner.go:195] Run: cat /version.json
	I1019 12:47:10.549082  566789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:47:10.549098  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.549152  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.568410  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.569732  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.661892  566789 ssh_runner.go:195] Run: systemctl --version
	I1019 12:47:10.721662  566789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:47:10.760838  566789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:47:10.766051  566789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:47:10.766129  566789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:47:10.774906  566789 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
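The find invocation above renames any bridge or podman CNI configs to *.mk_disabled so they cannot compete with the CNI minikube installs; here none matched, so nothing was moved. A slightly more readable sketch of the same match-and-rename logic:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;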
	I1019 12:47:10.774930  566789 start.go:495] detecting cgroup driver to use...
	I1019 12:47:10.774960  566789 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:47:10.775002  566789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:47:10.792972  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:47:10.807784  566789 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:47:10.807842  566789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:47:10.826570  566789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:47:10.840818  566789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:47:10.970500  566789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:47:11.085609  566789 docker.go:234] disabling docker service ...
	I1019 12:47:11.085712  566789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:47:11.100354  566789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:47:11.113435  566789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:47:11.226108  566789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:47:11.336125  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:47:11.349166  566789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:47:11.369639  566789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:47:11.369711  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.389691  566789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:47:11.389763  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.415070  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.506658  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.636678  566789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:47:11.645714  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.654789  566789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.663512  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
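The sed passes above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) should leave the drop-in looking roughly like the following; this is a reconstruction from the commands, not a dump of the actual file:

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only, sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]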
	I1019 12:47:11.702762  566789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:47:11.711000  566789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
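Pod-to-pod traffic requires IPv4 forwarding in the node's kernel; the echo above is the transient form of that toggle. Equivalent, assuming a standard sysctl binary on the node:

    sudo sysctl -w net.ipv4.ip_forward=1
    sysctl net.ipv4.ip_forward    # net.ipv4.ip_forward = 1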
	I1019 12:47:11.719065  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:11.844451  566789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:47:12.001172  566789 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:47:12.001248  566789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:47:12.005837  566789 start.go:563] Will wait 60s for crictl version
	I1019 12:47:12.005906  566789 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.010117  566789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:47:12.036191  566789 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
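Note the version probe needs no --runtime-endpoint flag: /etc/crictl.yaml, written a few lines earlier, already pins crictl to the CRI-O socket. With that file in place these two invocations behave identically:

    sudo crictl version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version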
	I1019 12:47:12.036275  566789 ssh_runner.go:195] Run: crio --version
	I1019 12:47:12.067753  566789 ssh_runner.go:195] Run: crio --version
	I1019 12:47:12.105233  566789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:47:12.107543  566789 cli_runner.go:164] Run: docker network inspect pause-513789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:47:12.129986  566789 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:47:12.135113  566789 kubeadm.go:883] updating cluster {Name:pause-513789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:47:12.135382  566789 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:12.135558  566789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:47:12.172632  566789 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:47:12.172663  566789 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:47:12.172732  566789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:47:07.679856  567019 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:47:07.680109  567019 start.go:159] libmachine.API.Create for "NoKubernetes-352361" (driver="docker")
	I1019 12:47:07.680147  567019 client.go:168] LocalClient.Create starting
	I1019 12:47:07.680217  567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:47:07.680261  567019 main.go:141] libmachine: Decoding PEM data...
	I1019 12:47:07.680285  567019 main.go:141] libmachine: Parsing certificate...
	I1019 12:47:07.680364  567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:47:07.680396  567019 main.go:141] libmachine: Decoding PEM data...
	I1019 12:47:07.680464  567019 main.go:141] libmachine: Parsing certificate...
	I1019 12:47:07.680812  567019 cli_runner.go:164] Run: docker network inspect NoKubernetes-352361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:47:07.698385  567019 cli_runner.go:211] docker network inspect NoKubernetes-352361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:47:07.698476  567019 network_create.go:284] running [docker network inspect NoKubernetes-352361] to gather additional debugging logs...
	I1019 12:47:07.698503  567019 cli_runner.go:164] Run: docker network inspect NoKubernetes-352361
	W1019 12:47:07.716119  567019 cli_runner.go:211] docker network inspect NoKubernetes-352361 returned with exit code 1
	I1019 12:47:07.716148  567019 network_create.go:287] error running [docker network inspect NoKubernetes-352361]: docker network inspect NoKubernetes-352361: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-352361 not found
	I1019 12:47:07.716171  567019 network_create.go:289] output of [docker network inspect NoKubernetes-352361]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-352361 not found
	
	** /stderr **
	I1019 12:47:07.716311  567019 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:47:07.734598  567019 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:47:07.735404  567019 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:47:07.735903  567019 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:47:07.736488  567019 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7bfed117f373 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:eb:6e:51:bc:90} reservation:<nil>}
	I1019 12:47:07.737236  567019 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5bcb1162b0f8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:46:10:30:a3:e7:95} reservation:<nil>}
	I1019 12:47:07.738123  567019 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8aca0}
	I1019 12:47:07.738149  567019 network_create.go:124] attempt to create docker network NoKubernetes-352361 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:47:07.738190  567019 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-352361 NoKubernetes-352361
	I1019 12:47:07.802676  567019 network_create.go:108] docker network NoKubernetes-352361 192.168.94.0/24 created
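Subnet selection above walks the private 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, 85, ...) until one is unclaimed, then creates the bridge network with a fixed gateway. The result can be checked by hand with a Go-template query (network name from this log):

    docker network inspect NoKubernetes-352361 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # 192.168.94.0/24 via 192.168.94.1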
	I1019 12:47:07.802704  567019 kic.go:121] calculated static IP "192.168.94.2" for the "NoKubernetes-352361" container
	I1019 12:47:07.802773  567019 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:47:07.823795  567019 cli_runner.go:164] Run: docker volume create NoKubernetes-352361 --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:47:07.844297  567019 oci.go:103] Successfully created a docker volume NoKubernetes-352361
	I1019 12:47:07.844378  567019 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-352361-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --entrypoint /usr/bin/test -v NoKubernetes-352361:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:47:08.262315  567019 oci.go:107] Successfully prepared a docker volume NoKubernetes-352361
	I1019 12:47:08.262362  567019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:08.262382  567019 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:47:08.262459  567019 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-352361:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:47:11.724605  567019 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-352361:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.462099757s)
	I1019 12:47:11.724638  567019 kic.go:203] duration metric: took 3.46225224s to extract preloaded images to volume ...
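The extraction above populates the named volume before the node container ever mounts it: a throwaway container runs tar from the base image against the lz4-compressed preload. The general shape of the trick, with placeholder names standing in for the concrete values shown in the log:

    # $PRELOAD_TARBALL, $VOLUME_NAME and $BASE_IMAGE are placeholders here.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v "$VOLUME_NAME:/extractDir" \
      "$BASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir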
	W1019 12:47:11.724737  567019 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:47:11.724779  567019 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:47:11.724831  567019 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:47:11.796510  567019 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-352361 --name NoKubernetes-352361 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-352361 --network NoKubernetes-352361 --ip 192.168.94.2 --volume NoKubernetes-352361:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
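Each --publish=127.0.0.1::PORT in the docker run above binds the container port to a random loopback port on the host, which is why the log keeps running docker container inspect with a Ports template: that is how the assigned port is recovered afterwards (33405 for pause-513789's SSH earlier in this log). For example:

    docker container inspect NoKubernetes-352361 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'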
	I1019 12:47:12.075884  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Running}}
	I1019 12:47:12.096039  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.117338  567019 cli_runner.go:164] Run: docker exec NoKubernetes-352361 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:47:12.170507  567019 oci.go:144] the created container "NoKubernetes-352361" has a running status.
	I1019 12:47:12.170549  567019 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa...
	I1019 12:47:12.456981  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1019 12:47:12.457038  567019 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:47:12.482401  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.203834  566789 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:47:12.203864  566789 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:47:12.203875  566789 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 12:47:12.204074  566789 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-513789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:47:12.204166  566789 ssh_runner.go:195] Run: crio config
	I1019 12:47:12.274357  566789 cni.go:84] Creating CNI manager for ""
	I1019 12:47:12.274385  566789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:47:12.274404  566789 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:47:12.274539  566789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-513789 NodeName:pause-513789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:47:12.274731  566789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-513789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
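The generated file stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; it is written to /var/tmp/minikube/kubeadm.yaml.new and only swapped in if it differs from the running config (see the diff -u later in this log). One way to sanity-check such a file, assuming the bundled kubeadm offers the `config validate` subcommand (present in recent releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new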
	I1019 12:47:12.274831  566789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:47:12.284815  566789 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:47:12.284901  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:47:12.295307  566789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1019 12:47:12.313190  566789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:47:12.335687  566789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1019 12:47:12.350616  566789 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:47:12.355596  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:12.479050  566789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:47:12.495415  566789 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789 for IP: 192.168.85.2
	I1019 12:47:12.495457  566789 certs.go:195] generating shared ca certs ...
	I1019 12:47:12.495480  566789 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:12.495650  566789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:47:12.495740  566789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:47:12.495761  566789 certs.go:257] generating profile certs ...
	I1019 12:47:12.495868  566789 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key
	I1019 12:47:12.495945  566789 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.key.18d09e63
	I1019 12:47:12.495993  566789 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.key
	I1019 12:47:12.496122  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:47:12.496165  566789 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:47:12.496181  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:47:12.496211  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:47:12.496244  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:47:12.496270  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:47:12.496320  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:12.497223  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:47:12.520269  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:47:12.541028  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:47:12.562203  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:47:12.581210  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:47:12.600406  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:47:12.618473  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:47:12.636034  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:47:12.654005  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:47:12.671893  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:47:12.690891  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:47:12.709109  566789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:47:12.722449  566789 ssh_runner.go:195] Run: openssl version
	I1019 12:47:12.728729  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:47:12.738028  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.742543  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.742618  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.778375  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:47:12.786735  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:47:12.795392  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.799063  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.799116  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.835223  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:47:12.843760  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:47:12.852455  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.856175  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.856232  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.890079  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
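The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: the `openssl x509 -hash` probes compute exactly the value the TLS stack expects as the /etc/ssl/certs/<hash>.0 filename. For the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> hence the symlink /etc/ssl/certs/b5213941.0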
	I1019 12:47:12.898515  566789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:47:12.902398  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:47:12.936853  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:47:12.973228  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:47:13.009486  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:47:13.045931  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:47:13.082906  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
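Each of the six openssl probes above uses -checkend 86400, which exits non-zero if the certificate expires within 86400 seconds (24 hours); the answer is carried in the exit status, not in any output. Sketch:

    if sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h; minikube would regenerate it"
    fi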
	I1019 12:47:13.118070  566789 kubeadm.go:400] StartCluster: {Name:pause-513789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:47:13.118191  566789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:47:13.118255  566789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:47:13.146189  566789 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:13.146210  566789 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:13.146214  566789 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:13.146217  566789 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:13.146219  566789 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:13.146222  566789 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:13.146224  566789 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:13.146226  566789 cri.go:89] found id: ""
	I1019 12:47:13.146264  566789 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:47:13.157896  566789 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:13Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:47:13.157981  566789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:47:13.165644  566789 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:47:13.165663  566789 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:47:13.165700  566789 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:47:13.173040  566789 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:47:13.173786  566789 kubeconfig.go:125] found "pause-513789" server: "https://192.168.85.2:8443"
	I1019 12:47:13.174669  566789 kapi.go:59] client config for pause-513789: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key", CAFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:47:13.175076  566789 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 12:47:13.175090  566789 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 12:47:13.175095  566789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 12:47:13.175099  566789 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 12:47:13.175103  566789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 12:47:13.175449  566789 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:47:13.182686  566789 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:47:13.182719  566789 kubeadm.go:601] duration metric: took 17.05007ms to restartPrimaryControlPlane
	I1019 12:47:13.182731  566789 kubeadm.go:402] duration metric: took 64.677075ms to StartCluster
	I1019 12:47:13.182749  566789 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:13.182809  566789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:47:13.183683  566789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:13.183892  566789 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:47:13.183956  566789 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:47:13.184106  566789 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:13.186774  566789 out.go:179] * Verifying Kubernetes components...
	I1019 12:47:13.186779  566789 out.go:179] * Enabled addons: 
	I1019 12:47:12.106818  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:12.107267  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:12.107322  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:12.107380  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:12.141993  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:12.142020  534438 cri.go:89] found id: ""
	I1019 12:47:12.142031  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:12.142091  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.146949  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:12.147024  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:12.177373  534438 cri.go:89] found id: ""
	I1019 12:47:12.177399  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.177409  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:12.177417  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:12.177481  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:12.209458  534438 cri.go:89] found id: ""
	I1019 12:47:12.209486  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.209498  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:12.209507  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:12.209578  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:12.246133  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:12.246160  534438 cri.go:89] found id: ""
	I1019 12:47:12.246172  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:12.246234  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.250135  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:12.250202  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:12.279862  534438 cri.go:89] found id: ""
	I1019 12:47:12.279888  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.279899  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:12.279919  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:12.279978  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:12.316082  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:12.316155  534438 cri.go:89] found id: ""
	I1019 12:47:12.316183  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:12.316284  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.321372  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:12.321523  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:12.355519  534438 cri.go:89] found id: ""
	I1019 12:47:12.355554  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.355564  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:12.355572  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:12.355626  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:12.387412  534438 cri.go:89] found id: ""
	I1019 12:47:12.387457  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.387474  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:12.387490  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:12.387510  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:12.421884  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:12.421923  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 12:47:12.485178  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:12.485209  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:12.521745  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:12.521775  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:12.610389  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:12.610430  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:12.626926  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:12.626953  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:12.683646  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:12.683682  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:12.683701  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:12.717714  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:12.717743  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:13.187859  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:13.187877  566789 addons.go:514] duration metric: took 3.925144ms for enable addons: enabled=[]
	I1019 12:47:13.301401  566789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:47:13.315570  566789 node_ready.go:35] waiting up to 6m0s for node "pause-513789" to be "Ready" ...
	I1019 12:47:13.324023  566789 node_ready.go:49] node "pause-513789" is "Ready"
	I1019 12:47:13.324056  566789 node_ready.go:38] duration metric: took 8.446932ms for node "pause-513789" to be "Ready" ...
	I1019 12:47:13.324074  566789 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:47:13.324126  566789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:47:13.338608  566789 api_server.go:72] duration metric: took 154.683606ms to wait for apiserver process to appear ...
	I1019 12:47:13.338648  566789 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:47:13.338675  566789 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 12:47:13.345028  566789 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 12:47:13.346309  566789 api_server.go:141] control plane version: v1.34.1
	I1019 12:47:13.346338  566789 api_server.go:131] duration metric: took 7.681294ms to wait for apiserver health ...
	I1019 12:47:13.346350  566789 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:47:13.351081  566789 system_pods.go:59] 7 kube-system pods found
	I1019 12:47:13.351113  566789 system_pods.go:61] "coredns-66bc5c9577-7zzkk" [ef0f8e6f-65f2-4fde-8175-2b4225113317] Running
	I1019 12:47:13.351121  566789 system_pods.go:61] "etcd-pause-513789" [2cc0d169-04c8-4fac-95c0-7b1a16495b1d] Running
	I1019 12:47:13.351126  566789 system_pods.go:61] "kindnet-ndk9h" [5e8161fd-e69c-49f1-8f05-35afc347c891] Running
	I1019 12:47:13.351132  566789 system_pods.go:61] "kube-apiserver-pause-513789" [1e828fa7-bbcd-4a8a-85aa-056fdf001c86] Running
	I1019 12:47:13.351138  566789 system_pods.go:61] "kube-controller-manager-pause-513789" [7b65f53e-f118-47a6-a06a-99d2f76f98f1] Running
	I1019 12:47:13.351143  566789 system_pods.go:61] "kube-proxy-nf888" [09ff1219-4f13-459f-b8a7-1296f69a528a] Running
	I1019 12:47:13.351147  566789 system_pods.go:61] "kube-scheduler-pause-513789" [02076936-95b4-464d-892a-11d38e0e1bb3] Running
	I1019 12:47:13.351156  566789 system_pods.go:74] duration metric: took 4.797542ms to wait for pod list to return data ...
	I1019 12:47:13.351186  566789 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:47:13.353457  566789 default_sa.go:45] found service account: "default"
	I1019 12:47:13.353480  566789 default_sa.go:55] duration metric: took 2.281714ms for default service account to be created ...
	I1019 12:47:13.353492  566789 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:47:13.357137  566789 system_pods.go:86] 7 kube-system pods found
	I1019 12:47:13.357164  566789 system_pods.go:89] "coredns-66bc5c9577-7zzkk" [ef0f8e6f-65f2-4fde-8175-2b4225113317] Running
	I1019 12:47:13.357171  566789 system_pods.go:89] "etcd-pause-513789" [2cc0d169-04c8-4fac-95c0-7b1a16495b1d] Running
	I1019 12:47:13.357183  566789 system_pods.go:89] "kindnet-ndk9h" [5e8161fd-e69c-49f1-8f05-35afc347c891] Running
	I1019 12:47:13.357189  566789 system_pods.go:89] "kube-apiserver-pause-513789" [1e828fa7-bbcd-4a8a-85aa-056fdf001c86] Running
	I1019 12:47:13.357195  566789 system_pods.go:89] "kube-controller-manager-pause-513789" [7b65f53e-f118-47a6-a06a-99d2f76f98f1] Running
	I1019 12:47:13.357200  566789 system_pods.go:89] "kube-proxy-nf888" [09ff1219-4f13-459f-b8a7-1296f69a528a] Running
	I1019 12:47:13.357206  566789 system_pods.go:89] "kube-scheduler-pause-513789" [02076936-95b4-464d-892a-11d38e0e1bb3] Running
	I1019 12:47:13.357219  566789 system_pods.go:126] duration metric: took 3.720277ms to wait for k8s-apps to be running ...
	I1019 12:47:13.357233  566789 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:47:13.357286  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:13.373599  566789 system_svc.go:56] duration metric: took 16.34003ms WaitForService to wait for kubelet
	I1019 12:47:13.373633  566789 kubeadm.go:586] duration metric: took 189.715191ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:47:13.373655  566789 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:47:13.376826  566789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:47:13.376855  566789 node_conditions.go:123] node cpu capacity is 8
	I1019 12:47:13.376871  566789 node_conditions.go:105] duration metric: took 3.210526ms to run NodePressure ...
	I1019 12:47:13.376886  566789 start.go:241] waiting for startup goroutines ...
	I1019 12:47:13.376895  566789 start.go:246] waiting for cluster config update ...
	I1019 12:47:13.376907  566789 start.go:255] writing updated cluster config ...
	I1019 12:47:13.377229  566789 ssh_runner.go:195] Run: rm -f paused
	I1019 12:47:13.381249  566789 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:47:13.381897  566789 kapi.go:59] client config for pause-513789: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key", CAFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:47:13.384623  566789 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7zzkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.389573  566789 pod_ready.go:94] pod "coredns-66bc5c9577-7zzkk" is "Ready"
	I1019 12:47:13.389597  566789 pod_ready.go:86] duration metric: took 4.949851ms for pod "coredns-66bc5c9577-7zzkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.391518  566789 pod_ready.go:83] waiting for pod "etcd-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.395600  566789 pod_ready.go:94] pod "etcd-pause-513789" is "Ready"
	I1019 12:47:13.395619  566789 pod_ready.go:86] duration metric: took 4.082836ms for pod "etcd-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.397737  566789 pod_ready.go:83] waiting for pod "kube-apiserver-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.402489  566789 pod_ready.go:94] pod "kube-apiserver-pause-513789" is "Ready"
	I1019 12:47:13.402511  566789 pod_ready.go:86] duration metric: took 4.751393ms for pod "kube-apiserver-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.404566  566789 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.785680  566789 pod_ready.go:94] pod "kube-controller-manager-pause-513789" is "Ready"
	I1019 12:47:13.785716  566789 pod_ready.go:86] duration metric: took 381.123711ms for pod "kube-controller-manager-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.985860  566789 pod_ready.go:83] waiting for pod "kube-proxy-nf888" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.385705  566789 pod_ready.go:94] pod "kube-proxy-nf888" is "Ready"
	I1019 12:47:14.385734  566789 pod_ready.go:86] duration metric: took 399.849542ms for pod "kube-proxy-nf888" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.585897  566789 pod_ready.go:83] waiting for pod "kube-scheduler-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.985959  566789 pod_ready.go:94] pod "kube-scheduler-pause-513789" is "Ready"
	I1019 12:47:14.985992  566789 pod_ready.go:86] duration metric: took 400.063804ms for pod "kube-scheduler-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.986008  566789 pod_ready.go:40] duration metric: took 1.604728835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:47:15.031675  566789 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:47:15.033499  566789 out.go:179] * Done! kubectl is now configured to use "pause-513789" cluster and "default" namespace by default
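
Editor's note: the pod_ready loop above lists kube-system pods by label and waits until each reports a Ready condition. Below is a minimal client-go sketch of that check; the kubeconfig path and selector list are illustrative placeholders, not minikube's actual helper code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: path to a kubeconfig; minikube writes one per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll each label selector until every matching pod is Ready.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"}
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !isReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Printf("pods for %q are Ready\n", sel)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
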
	I1019 12:47:12.502429  567019 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:47:12.502454  567019 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-352361 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:47:12.549270  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.568093  567019 machine.go:93] provisionDockerMachine start ...
	I1019 12:47:12.568212  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:12.587897  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:12.588244  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:12.588274  567019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:47:12.588984  567019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57968->127.0.0.1:33410: read: connection reset by peer
	I1019 12:47:15.722316  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-352361
	
	I1019 12:47:15.722353  567019 ubuntu.go:182] provisioning hostname "NoKubernetes-352361"
	I1019 12:47:15.722411  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:15.740588  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:15.740815  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:15.740828  567019 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-352361 && echo "NoKubernetes-352361" | sudo tee /etc/hostname
	I1019 12:47:15.887305  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-352361
	
	I1019 12:47:15.887395  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:15.908261  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:15.908505  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:15.908524  567019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-352361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-352361/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-352361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:47:16.045549  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
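
Editor's note: the provisioning steps above run shell commands inside the machine over SSH. A hedged sketch of the same pattern with golang.org/x/crypto/ssh follows; the key path and port 33410 are copied from the log and should be treated as placeholders.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumption: per-machine private key, as minikube stores under .minikube/machines/.
	key, err := os.ReadFile("/path/to/machines/NoKubernetes-352361/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33410", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Same shape as the hostname step above: set it now, persist it.
	out, err := session.CombinedOutput(`sudo hostname NoKubernetes-352361 && echo "NoKubernetes-352361" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
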
	I1019 12:47:16.045589  567019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:47:16.045637  567019 ubuntu.go:190] setting up certificates
	I1019 12:47:16.045672  567019 provision.go:84] configureAuth start
	I1019 12:47:16.045743  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:16.064335  567019 provision.go:143] copyHostCerts
	I1019 12:47:16.064371  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:16.064399  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:47:16.064408  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:16.064517  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:47:16.064617  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:16.064637  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:47:16.064655  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:16.064701  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:47:16.064758  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:16.064776  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:47:16.064780  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:16.064814  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:47:16.064877  567019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-352361 san=[127.0.0.1 192.168.94.2 NoKubernetes-352361 localhost minikube]
	I1019 12:47:16.292981  567019 provision.go:177] copyRemoteCerts
	I1019 12:47:16.293040  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:47:16.293076  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.311370  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:16.408089  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1019 12:47:16.408158  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:47:16.429066  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1019 12:47:16.429149  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 12:47:16.447703  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1019 12:47:16.447801  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:47:16.469778  567019 provision.go:87] duration metric: took 424.089418ms to configureAuth
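
Editor's note: configureAuth generates a server certificate whose SANs match the san=[...] list logged above. A self-contained crypto/x509 sketch is below; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-352361"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log above.
		DNSNames:    []string{"NoKubernetes-352361", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed: template doubles as parent (assumption for this sketch).
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
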
	I1019 12:47:16.469810  567019 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:47:16.469996  567019 config.go:182] Loaded profile config "NoKubernetes-352361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:16.470110  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.488342  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:16.488630  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:16.488657  567019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:47:16.735935  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:47:16.735960  567019 machine.go:96] duration metric: took 4.16784384s to provisionDockerMachine
	I1019 12:47:16.735973  567019 client.go:171] duration metric: took 9.0558178s to LocalClient.Create
	I1019 12:47:16.735998  567019 start.go:167] duration metric: took 9.055888318s to libmachine.API.Create "NoKubernetes-352361"
	I1019 12:47:16.736007  567019 start.go:293] postStartSetup for "NoKubernetes-352361" (driver="docker")
	I1019 12:47:16.736021  567019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:47:16.736079  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:47:16.736119  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.754587  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:16.853578  567019 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:47:16.857194  567019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:47:16.857221  567019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:47:16.857232  567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:47:16.857289  567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:47:16.857357  567019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:47:16.857367  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> /etc/ssl/certs/3552622.pem
	I1019 12:47:16.857466  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:47:16.865017  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:16.884810  567019 start.go:296] duration metric: took 148.78602ms for postStartSetup
	I1019 12:47:16.885193  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:16.903040  567019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json ...
	I1019 12:47:16.903275  567019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:47:16.903333  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.921188  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.016236  567019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:47:17.021128  567019 start.go:128] duration metric: took 9.343039619s to createHost
	I1019 12:47:17.021160  567019 start.go:83] releasing machines lock for "NoKubernetes-352361", held for 9.343173181s
	I1019 12:47:17.021235  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:17.039849  567019 ssh_runner.go:195] Run: cat /version.json
	I1019 12:47:17.039893  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:17.039927  567019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:47:17.040001  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:17.060356  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.060580  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.208234  567019 ssh_runner.go:195] Run: systemctl --version
	I1019 12:47:17.215181  567019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:47:17.250344  567019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:47:17.255213  567019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:47:17.255269  567019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:47:17.281978  567019 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
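
Editor's note: disabling the bridge/podman CNI configs amounts to renaming them with a .mk_disabled suffix, as the find/-exec mv command above does. The same effect in Go, as a sketch rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename bridge/podman CNI configs so the kindnet config is the only active one.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, p := range matches {
			if strings.HasSuffix(p, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", p)
		}
	}
}
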
	I1019 12:47:17.282001  567019 start.go:495] detecting cgroup driver to use...
	I1019 12:47:17.282029  567019 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:47:17.282074  567019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:47:17.298331  567019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:47:17.310636  567019 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:47:17.310702  567019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:47:17.331699  567019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:47:17.349218  567019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:47:17.440104  567019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
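
Editor's note: the last few commands stop, disable, and mask cri-docker so the kubelet can only reach CRI-O. A sketch of the same sequence via os/exec; failures are deliberately tolerated because the units may not exist on every host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"sudo", "systemctl", "stop", "-f", "cri-docker.socket"},
		{"sudo", "systemctl", "stop", "-f", "cri-docker.service"},
		{"sudo", "systemctl", "disable", "cri-docker.socket"},
		{"sudo", "systemctl", "mask", "cri-docker.service"},
	}
	for _, c := range cmds {
		// Log but ignore failures: hosts without cri-docker lack these units.
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", c, err, out)
		}
	}
}
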
	
	
	==> CRI-O <==
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.942442676Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943319446Z" level=info msg="Conmon does support the --sync option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943337897Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943356272Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.944132779Z" level=info msg="Conmon does support the --sync option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.944149949Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.947800757Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.947818744Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948271343Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948634122Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948685328Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.955143371Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996162598Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-7zzkk Namespace:kube-system ID:0ae173fcca407acb8faab2f3ceab6f28241c08ea23f839f805341bd6656d1da1 UID:ef0f8e6f-65f2-4fde-8175-2b4225113317 NetNS:/var/run/netns/a2ec5cca-d77d-47b3-8268-d188d1986418 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000314708}] Aliases:map[]}"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996380439Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-7zzkk for CNI network kindnet (type=ptp)"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996909966Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996934649Z" level=info msg="Starting seccomp notifier watcher"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996987408Z" level=info msg="Create NRI interface"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997084481Z" level=info msg="built-in NRI default validator is disabled"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997101015Z" level=info msg="runtime interface created"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997114991Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997123485Z" level=info msg="runtime interface starting up..."
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997131766Z" level=info msg="starting plugins..."
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997144186Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997474026Z" level=info msg="No systemd watchdog enabled"
	Oct 19 12:47:11 pause-513789 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1ec0fc3c0ba24       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   0ae173fcca407       coredns-66bc5c9577-7zzkk               kube-system
	14eb9e8442058       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   3a3e42903412b       kube-proxy-nf888                       kube-system
	920deebc214e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   1a5629f06bc75       kindnet-ndk9h                          kube-system
	0528c1143dc03       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   82d290076151f       kube-controller-manager-pause-513789   kube-system
	146dd0c10eabe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   086c289434acb       kube-apiserver-pause-513789            kube-system
	31890535538bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   da42b8b5f87ea       etcd-pause-513789                      kube-system
	dcc8e66b624dd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   a9fa1c5cde3a0       kube-scheduler-pause-513789            kube-system
	
	
	==> coredns [1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55131 - 11986 "HINFO IN 9083236024844154690.5669056859740369869. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091341844s
	
	
	==> describe nodes <==
	Name:               pause-513789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-513789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=pause-513789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_46_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:46:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-513789
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:47:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-513789
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                716b4031-f39f-49a2-9750-0f1bb7ecc1c1
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7zzkk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-513789                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-ndk9h                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-513789             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-513789    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-nf888                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-513789             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node pause-513789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node pause-513789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node pause-513789 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-513789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-513789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-513789 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-513789 event: Registered Node pause-513789 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-513789 status is now: NodeReady
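
Editor's note: while a profile is still running, the node view above can be regenerated directly, e.g.:

	kubectl describe node pause-513789
	# or, using minikube's bundled kubectl:
	minikube -p pause-513789 kubectl -- describe node pause-513789
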
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 31 d3 aa 8a bd 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	[Oct19 12:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.045444] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023837] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +2.047737] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +8.512033] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[Oct19 12:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[ +32.252549] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	
	
	==> etcd [31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d] <==
	{"level":"warn","ts":"2025-10-19T12:46:44.052012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.062567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.071982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.084980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.094063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.102792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.118753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.126104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.138840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.144336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.154249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.163999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.172171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.181013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.188323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.196258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.204976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.211437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.219946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.234782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.242815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.249879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.258282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.327317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:47:11.640929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.163864ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596480354682960 > lease_revoke:<id:06ed99fc820107d2>","response":"size:28"}
	
	
	==> kernel <==
	 12:47:18 up  2:29,  0 user,  load average: 3.73, 3.02, 1.95
	Linux pause-513789 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a] <==
	I1019 12:46:53.412306       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:46:53.412589       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 12:46:53.412762       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:46:53.412779       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:46:53.412796       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:46:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:46:53.706686       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:46:53.706712       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:46:53.706730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:46:53.709364       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:46:54.006849       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:46:54.006876       1 metrics.go:72] Registering metrics
	I1019 12:46:54.006954       1 controller.go:711] "Syncing nftables rules"
	I1019 12:47:03.708493       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:47:03.708550       1 main.go:301] handling current node
	I1019 12:47:13.713791       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:47:13.713822       1 main.go:301] handling current node
	
	
	==> kube-apiserver [146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197] <==
	I1019 12:46:44.843315       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:46:44.843618       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:46:44.844984       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:46:44.849294       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:44.849411       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 12:46:44.857261       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:44.858162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:46:45.040128       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:46:45.747332       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:46:45.751014       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:46:45.751033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:46:46.215924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:46:46.252862       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:46:46.353235       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:46:46.358789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 12:46:46.359940       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:46:46.363875       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:46:46.805938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:46:47.362334       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:46:47.372330       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:46:47.381654       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:46:52.510203       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:52.514687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:52.807829       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:46:52.857395       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59] <==
	I1019 12:46:51.764684       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:46:51.765845       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:46:51.804797       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 12:46:51.804829       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:46:51.804933       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:46:51.806162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 12:46:51.806249       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:46:51.806260       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:46:51.806249       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:46:51.806324       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:46:51.806325       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:46:51.806436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:46:51.806727       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:46:51.806857       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:46:51.809737       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:46:51.809768       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:46:51.809827       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:46:51.809878       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:46:51.809888       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:46:51.809894       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:46:51.810970       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:46:51.816118       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:46:51.817335       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-513789" podCIDRs=["10.244.0.0/24"]
	I1019 12:46:51.830901       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:47:06.758827       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d] <==
	I1019 12:46:53.262281       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:46:53.315498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:46:53.415849       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:46:53.415914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 12:46:53.416011       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:46:53.438518       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:46:53.438584       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:46:53.445824       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:46:53.446227       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:46:53.446250       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:46:53.447954       1 config.go:200] "Starting service config controller"
	I1019 12:46:53.448023       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:46:53.447974       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:46:53.447987       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:46:53.448018       1 config.go:309] "Starting node config controller"
	I1019 12:46:53.448062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:46:53.448064       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:46:53.448055       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:46:53.448067       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:46:53.548488       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:46:53.548485       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:46:53.548536       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01] <==
	E1019 12:46:44.806580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:46:44.806630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:46:44.806912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:46:44.806937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:46:44.806957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:46:44.806989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:46:44.806992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:46:44.807040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:46:44.807090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:46:44.807094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:46:44.807230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:46:44.807235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:46:44.807377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:46:44.807376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:46:45.616597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:46:45.698869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:46:45.744399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:46:45.757612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:46:45.793057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:46:45.882544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:46:45.944116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:46:45.988108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:46:46.004660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:46:46.072772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 12:46:49.203944       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:46:54 pause-513789 kubelet[1320]: I1019 12:46:54.250068    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nf888" podStartSLOduration=2.250063543 podStartE2EDuration="2.250063543s" podCreationTimestamp="2025-10-19 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:46:54.240467081 +0000 UTC m=+7.132736001" watchObservedRunningTime="2025-10-19 12:46:54.250063543 +0000 UTC m=+7.142332443"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.083958    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.212950    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0f8e6f-65f2-4fde-8175-2b4225113317-config-volume\") pod \"coredns-66bc5c9577-7zzkk\" (UID: \"ef0f8e6f-65f2-4fde-8175-2b4225113317\") " pod="kube-system/coredns-66bc5c9577-7zzkk"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.212995    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tr9\" (UniqueName: \"kubernetes.io/projected/ef0f8e6f-65f2-4fde-8175-2b4225113317-kube-api-access-48tr9\") pod \"coredns-66bc5c9577-7zzkk\" (UID: \"ef0f8e6f-65f2-4fde-8175-2b4225113317\") " pod="kube-system/coredns-66bc5c9577-7zzkk"
	Oct 19 12:47:05 pause-513789 kubelet[1320]: I1019 12:47:05.280359    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7zzkk" podStartSLOduration=12.280334008 podStartE2EDuration="12.280334008s" podCreationTimestamp="2025-10-19 12:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:47:05.268467155 +0000 UTC m=+18.160736061" watchObservedRunningTime="2025-10-19 12:47:05.280334008 +0000 UTC m=+18.172602908"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.199908    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200017    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200122    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200146    1320 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200166    1320 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265406    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265480    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265494    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.300710    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.481955    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.772618    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: W1019 12:47:10.177925    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265894    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265972    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265999    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:12 pause-513789 kubelet[1320]: E1019 12:47:12.217269    1320 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Oct 19 12:47:15 pause-513789 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:47:15 pause-513789 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:47:15 pause-513789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:47:15 pause-513789 systemd[1]: kubelet.service: Consumed 1.199s CPU time.
	

-- /stdout --
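Note on the kubelet log above: after the pause operation, every CRI call fails with "dial unix /var/run/crio/crio.sock: connect: no such file or directory" until systemd stops the kubelet, which is consistent with the pause having stopped CRI-O. As a diagnostic sketch (these commands were not part of the captured run; the profile name and socket path are taken from the log), the runtime state on the node could be confirmed with:

	out/minikube-linux-amd64 -p pause-513789 ssh -- sudo systemctl status crio --no-pager
	out/minikube-linux-amd64 -p pause-513789 ssh -- sudo ls -l /var/run/crio/crio.sock

If crio is inactive and the socket is absent, the kubelet errors above are a side effect of the pause rather than an independent fault.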
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-513789 -n pause-513789
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-513789 -n pause-513789: exit status 2 (318.92677ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-513789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-513789
helpers_test.go:243: (dbg) docker inspect pause-513789:

-- stdout --
	[
	    {
	        "Id": "7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc",
	        "Created": "2025-10-19T12:46:31.27526377Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 560120,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:46:31.314726475Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/hosts",
	        "LogPath": "/var/lib/docker/containers/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc/7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc-json.log",
	        "Name": "/pause-513789",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-513789:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-513789",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7b2509a4aec9b9d40247b89ba21c70a089e26382e259477cab3b4c899101bcbc",
	                "LowerDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/merged",
	                "UpperDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/diff",
	                "WorkDir": "/var/lib/docker/overlay2/58381575d7c83c49728f3369fc7321b73c775694570e55f8d5f099b2f182e349/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-513789",
	                "Source": "/var/lib/docker/volumes/pause-513789/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-513789",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-513789",
	                "name.minikube.sigs.k8s.io": "pause-513789",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0510ac92a2fc2782a6f5954270692bdf4a2b9e635c12be17558ecd4f3306ab22",
	            "SandboxKey": "/var/run/docker/netns/0510ac92a2fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-513789": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:5e:90:d4:37:ff",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5bcb1162b0f864bd39e0ae8f3ebf42dd06eacb92bce754ec3ed5c0330e43511e",
	                    "EndpointID": "76820c09cacde9767d087d622dbfc1e8176aa1e2bada325b06f72216381c38b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-513789",
	                        "7b2509a4aec9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
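Note on the inspect output above: the node container still reports "Status": "running" and "Paused": false, which is expected at the docker level (minikube pauses the workloads inside the node container rather than docker-pausing the container itself), so docker inspect alone cannot confirm the pause. As a minimal sketch, assuming the same profile name, the two relevant fields can be read directly via docker's --format flag:

	docker inspect pause-513789 --format 'status={{.State.Status}} paused={{.State.Paused}}'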
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-513789 -n pause-513789
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-513789 -n pause-513789: exit status 2 (319.292542ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-513789 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-931932 sudo systemctl cat cri-docker --no-pager                                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                          │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cri-dockerd --version                                                                   │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl status containerd --all --full --no-pager                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl cat containerd --no-pager                                                     │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /lib/systemd/system/containerd.service                                              │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo cat /etc/containerd/config.toml                                                         │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo containerd config dump                                                                  │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl status crio --all --full --no-pager                                           │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo systemctl cat crio --no-pager                                                           │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                 │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ ssh     │ -p cilium-931932 sudo crio config                                                                             │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │                     │
	│ delete  │ -p cilium-931932                                                                                              │ cilium-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ start   │ -p running-upgrade-188277 --memory=3072 --vm-driver=docker  --container-runtime=crio                          │ running-upgrade-188277 │ jenkins │ v1.32.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ ssh     │ cert-options-868990 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                   │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ ssh     │ -p cert-options-868990 -- sudo cat /etc/kubernetes/admin.conf                                                 │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ delete  │ -p cert-options-868990                                                                                        │ cert-options-868990    │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:46 UTC │
	│ start   │ -p pause-513789 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio     │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p running-upgrade-188277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio      │ running-upgrade-188277 │ jenkins │ v1.37.0 │ 19 Oct 25 12:46 UTC │ 19 Oct 25 12:47 UTC │
	│ delete  │ -p running-upgrade-188277                                                                                     │ running-upgrade-188277 │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p pause-513789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                              │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │ 19 Oct 25 12:47 UTC │
	│ start   │ -p NoKubernetes-352361 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio │ NoKubernetes-352361    │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	│ start   │ -p NoKubernetes-352361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio         │ NoKubernetes-352361    │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	│ pause   │ -p pause-513789 --alsologtostderr -v=5                                                                        │ pause-513789           │ jenkins │ v1.37.0 │ 19 Oct 25 12:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:47:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:47:07.484915  567019 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:47:07.485226  567019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:07.485237  567019 out.go:374] Setting ErrFile to fd 2...
	I1019 12:47:07.485242  567019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:47:07.485414  567019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:47:07.485850  567019 out.go:368] Setting JSON to false
	I1019 12:47:07.487028  567019 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8975,"bootTime":1760869052,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:47:07.487124  567019 start.go:141] virtualization: kvm guest
	I1019 12:47:07.489668  567019 out.go:179] * [NoKubernetes-352361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:47:07.491101  567019 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:47:07.491138  567019 notify.go:220] Checking for updates...
	I1019 12:47:07.493630  567019 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:47:07.495523  567019 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:47:07.496593  567019 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:47:07.497693  567019 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:47:07.498821  567019 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:47:07.500619  567019 config.go:182] Loaded profile config "cert-expiration-599351": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.500759  567019 config.go:182] Loaded profile config "kubernetes-upgrade-566686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.500931  567019 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:07.501050  567019 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:47:07.525532  567019 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:47:07.525675  567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:47:07.584951  567019 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:47:07.57298267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:47:07.585062  567019 docker.go:318] overlay module found
	I1019 12:47:07.586558  567019 out.go:179] * Using the docker driver based on user configuration
	I1019 12:47:07.587609  567019 start.go:305] selected driver: docker
	I1019 12:47:07.587625  567019 start.go:925] validating driver "docker" against <nil>
	I1019 12:47:07.587637  567019 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:47:07.588202  567019 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:47:07.647506  567019 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:47:07.636405307 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:47:07.647708  567019 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:47:07.647914  567019 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:47:07.649258  567019 out.go:179] * Using Docker driver with root privileges
	I1019 12:47:07.650230  567019 cni.go:84] Creating CNI manager for ""
	I1019 12:47:07.650307  567019 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:47:07.650325  567019 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:47:07.650406  567019 start.go:349] cluster config:
	{Name:NoKubernetes-352361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-352361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:47:07.651780  567019 out.go:179] * Starting "NoKubernetes-352361" primary control-plane node in "NoKubernetes-352361" cluster
	I1019 12:47:07.653083  567019 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:47:07.654549  567019 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:47:07.655686  567019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:07.655807  567019 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:47:07.655836  567019 cache.go:58] Caching tarball of preloaded images
	I1019 12:47:07.655729  567019 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:47:07.655968  567019 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:47:07.655983  567019 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:47:07.656132  567019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json ...
	I1019 12:47:07.656158  567019 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json: {Name:mk0b5a2ed7872728a1688c82c2fcbe2b071deb74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:07.677788  567019 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:47:07.677816  567019 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:47:07.677836  567019 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:47:07.677867  567019 start.go:360] acquireMachinesLock for NoKubernetes-352361: {Name:mkcfbda9f21f0534f21846ed1fab72e95ee68b31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:47:07.677973  567019 start.go:364] duration metric: took 86.06µs to acquireMachinesLock for "NoKubernetes-352361"
	I1019 12:47:07.678003  567019 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-352361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:NoKubernetes-352361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:47:07.678071  567019 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:47:05.924501  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:05.925021  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:05.925086  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:05.925134  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:05.951922  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:05.951940  534438 cri.go:89] found id: ""
	I1019 12:47:05.951961  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:05.952010  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:05.956095  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:05.956172  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:05.983137  534438 cri.go:89] found id: ""
	I1019 12:47:05.983168  534438 logs.go:282] 0 containers: []
	W1019 12:47:05.983179  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:05.983188  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:05.983252  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:06.010380  534438 cri.go:89] found id: ""
	I1019 12:47:06.010408  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.010431  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:06.010441  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:06.010507  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:06.037530  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:06.037555  534438 cri.go:89] found id: ""
	I1019 12:47:06.037566  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:06.037639  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:06.041918  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:06.041992  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:06.070033  534438 cri.go:89] found id: ""
	I1019 12:47:06.070069  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.070080  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:06.070088  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:06.070137  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:06.096479  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:06.096501  534438 cri.go:89] found id: ""
	I1019 12:47:06.096509  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:06.096556  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:06.100487  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:06.100569  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:06.127501  534438 cri.go:89] found id: ""
	I1019 12:47:06.127526  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.127534  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:06.127542  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:06.127600  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:06.155139  534438 cri.go:89] found id: ""
	I1019 12:47:06.155175  534438 logs.go:282] 0 containers: []
	W1019 12:47:06.155186  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:06.155198  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:06.155214  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 12:47:06.199723  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:06.199759  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:06.230366  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:06.230403  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:06.306133  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:06.306170  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:06.323555  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:06.323586  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:06.383512  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:06.383550  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:06.383568  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:06.419804  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:06.419838  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:06.471511  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:06.471544  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
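The "listing CRI containers" / "Gathering logs" pairs above are minikube's diagnostics loop: for each control-plane component it asks crictl for matching container IDs (running or exited), then tails the logs of every hit. A minimal sketch of that pattern, with illustrative names rather than minikube's actual API:

    // Illustrative sketch of the crictl-based log gathering shown above;
    // function names are hypothetical, not minikube's real logs.go API.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors "sudo crictl ps -a --quiet --name=<component>".
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := containerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("No container was found matching %q\n", c)
    			continue
    		}
    		for _, id := range ids {
    			// Tail the last 400 lines, as in "crictl logs --tail 400 <id>".
    			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
    		}
    	}
    }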
	I1019 12:47:09.002502  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:09.003005  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:09.003075  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:09.003134  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:09.033936  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:09.033962  534438 cri.go:89] found id: ""
	I1019 12:47:09.033974  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:09.034038  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.038268  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:09.038346  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:09.069656  534438 cri.go:89] found id: ""
	I1019 12:47:09.069687  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.069698  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:09.069707  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:09.069768  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:09.100969  534438 cri.go:89] found id: ""
	I1019 12:47:09.100994  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.101002  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:09.101008  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:09.101065  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:09.129927  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:09.129955  534438 cri.go:89] found id: ""
	I1019 12:47:09.129966  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:09.130030  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.134502  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:09.134570  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:09.165215  534438 cri.go:89] found id: ""
	I1019 12:47:09.165245  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.165257  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:09.165265  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:09.165339  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:09.197250  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:09.197270  534438 cri.go:89] found id: ""
	I1019 12:47:09.197278  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:09.197326  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:09.201879  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:09.201949  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:09.228702  534438 cri.go:89] found id: ""
	I1019 12:47:09.228731  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.228739  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:09.228745  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:09.228806  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:09.258347  534438 cri.go:89] found id: ""
	I1019 12:47:09.258379  534438 logs.go:282] 0 containers: []
	W1019 12:47:09.258390  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:09.258403  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:09.258417  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:09.290203  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:09.290241  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:09.369505  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:09.369539  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:09.387462  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:09.387499  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:09.446836  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:09.446870  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:09.446896  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:09.482628  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:09.482663  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:09.532862  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:09.532899  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:09.560404  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:09.560467  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
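The "Checking apiserver healthz ... stopped: ... connection refused" pairs show a poll loop: minikube probes https://&lt;node-ip&gt;:8443/healthz, and on failure re-gathers logs and retries until a deadline. A self-contained sketch of that poll, assuming a self-signed apiserver certificate (hence InsecureSkipVerify); timings are illustrative:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connect: connection refused
    		} else {
    			healthy := resp.StatusCode == http.StatusOK
    			resp.Body.Close()
    			if healthy {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(3 * time.Second) // logs are gathered between attempts, as above
    	}
    	fmt.Println("timed out waiting for apiserver")
    }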
	I1019 12:47:07.414386  566789 out.go:252] * Updating the running docker "pause-513789" container ...
	I1019 12:47:07.414446  566789 machine.go:93] provisionDockerMachine start ...
	I1019 12:47:07.414523  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.434363  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.434712  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.434737  566789 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:47:07.574984  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-513789
	
	I1019 12:47:07.575015  566789 ubuntu.go:182] provisioning hostname "pause-513789"
	I1019 12:47:07.575090  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.595810  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.596100  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.596120  566789 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-513789 && echo "pause-513789" | sudo tee /etc/hostname
	I1019 12:47:07.747222  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-513789
	
	I1019 12:47:07.747310  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:07.769233  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:07.769578  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:07.769610  566789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-513789' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-513789/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-513789' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:47:07.908184  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
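The shell block above is deliberately idempotent: it only touches /etc/hosts when no line already ends with the hostname, rewriting an existing 127.0.1.1 entry in place or appending one otherwise. A hypothetical helper that renders the same command for an arbitrary machine name:

    // Hypothetical helper rendering the idempotent /etc/hosts update shown above.
    package main

    import "fmt"

    func hostsUpdateCmd(name string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
    }

    func main() { fmt.Println(hostsUpdateCmd("pause-513789")) }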
	I1019 12:47:07.908218  566789 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:47:07.908267  566789 ubuntu.go:190] setting up certificates
	I1019 12:47:07.908282  566789 provision.go:84] configureAuth start
	I1019 12:47:07.908344  566789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-513789
	I1019 12:47:07.927117  566789 provision.go:143] copyHostCerts
	I1019 12:47:07.927194  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:47:07.927218  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:07.927297  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:47:07.927447  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:47:07.927463  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:07.927512  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:47:07.927632  566789 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:47:07.927646  566789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:07.927689  566789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:47:07.927804  566789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.pause-513789 san=[127.0.0.1 192.168.85.2 localhost minikube pause-513789]
	I1019 12:47:08.376872  566789 provision.go:177] copyRemoteCerts
	I1019 12:47:08.376941  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:47:08.377008  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:08.402285  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:08.502000  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:47:08.520851  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 12:47:08.539444  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:47:08.557879  566789 provision.go:87] duration metric: took 649.574789ms to configureAuth
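The "generating server cert ... san=[127.0.0.1 192.168.85.2 localhost minikube pause-513789]" line is the key step of configureAuth: one certificate whose SAN set covers both the host-published SSH/API ports (127.0.0.1) and the container IP. A self-contained sketch of issuing such a certificate with Go's crypto/x509 (self-signed for brevity, whereas minikube signs with its CA; the Organization value echoes the org= field above):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.pause-513789"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN set from the log line above:
    		DNSNames:    []string{"localhost", "minikube", "pause-513789"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }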
	I1019 12:47:08.557916  566789 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:47:08.558109  566789 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:08.558207  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:08.576409  566789 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:08.576680  566789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1019 12:47:08.576701  566789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:47:10.261815  566789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:47:10.261854  566789 machine.go:96] duration metric: took 2.847398039s to provisionDockerMachine
	I1019 12:47:10.261870  566789 start.go:293] postStartSetup for "pause-513789" (driver="docker")
	I1019 12:47:10.261884  566789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:47:10.261953  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:47:10.261991  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.282120  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.380332  566789 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:47:10.383990  566789 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:47:10.384013  566789 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:47:10.384023  566789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:47:10.384069  566789 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:47:10.384136  566789 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:47:10.384230  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:47:10.392056  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:10.410746  566789 start.go:296] duration metric: took 148.855428ms for postStartSetup
	I1019 12:47:10.410845  566789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:47:10.410925  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.429328  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.526008  566789 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:47:10.530892  566789 fix.go:56] duration metric: took 3.1381217s for fixHost
	I1019 12:47:10.530920  566789 start.go:83] releasing machines lock for "pause-513789", held for 3.138165877s
	I1019 12:47:10.530995  566789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-513789
	I1019 12:47:10.549040  566789 ssh_runner.go:195] Run: cat /version.json
	I1019 12:47:10.549082  566789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:47:10.549098  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.549152  566789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-513789
	I1019 12:47:10.568410  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.569732  566789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/pause-513789/id_rsa Username:docker}
	I1019 12:47:10.661892  566789 ssh_runner.go:195] Run: systemctl --version
	I1019 12:47:10.721662  566789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:47:10.760838  566789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:47:10.766051  566789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:47:10.766129  566789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:47:10.774906  566789 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:47:10.774930  566789 start.go:495] detecting cgroup driver to use...
	I1019 12:47:10.774960  566789 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:47:10.775002  566789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:47:10.792972  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:47:10.807784  566789 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:47:10.807842  566789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:47:10.826570  566789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:47:10.840818  566789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:47:10.970500  566789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:47:11.085609  566789 docker.go:234] disabling docker service ...
	I1019 12:47:11.085712  566789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:47:11.100354  566789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:47:11.113435  566789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:47:11.226108  566789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:47:11.336125  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:47:11.349166  566789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:47:11.369639  566789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:47:11.369711  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.389691  566789 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:47:11.389763  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.415070  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.506658  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.636678  566789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:47:11.645714  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.654789  566789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.663512  566789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:11.702762  566789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:47:11.711000  566789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:47:11.719065  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:11.844451  566789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:47:12.001172  566789 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:47:12.001248  566789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:47:12.005837  566789 start.go:563] Will wait 60s for crictl version
	I1019 12:47:12.005906  566789 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.010117  566789 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:47:12.036191  566789 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:47:12.036275  566789 ssh_runner.go:195] Run: crio --version
	I1019 12:47:12.067753  566789 ssh_runner.go:195] Run: crio --version
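The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to "systemd" (to match the detected host cgroup driver), reset conmon_cgroup, and open low ports via net.ipv4.ip_unprivileged_port_start=0, before a daemon-reload and crio restart. The two central edits, expressed as Go regexp replacements equivalent to the sed commands (illustrative, not minikube's code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"`
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	fmt.Println(conf)
    }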
	I1019 12:47:12.105233  566789 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:47:12.107543  566789 cli_runner.go:164] Run: docker network inspect pause-513789 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:47:12.129986  566789 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:47:12.135113  566789 kubeadm.go:883] updating cluster {Name:pause-513789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:47:12.135382  566789 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:12.135558  566789 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:47:12.172632  566789 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:47:12.172663  566789 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:47:12.172732  566789 ssh_runner.go:195] Run: sudo crictl images --output json
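"sudo crictl images --output json" returns the runtime's image list; minikube compares it against the images expected for the target Kubernetes version and skips tarball extraction when all are present ("all images are preloaded"). A sketch of that check, with the JSON struct trimmed to the repoTags field crictl emits and two expected image names given as examples:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			have[t] = true
    		}
    	}
    	// Two of the images a v1.34.1/cri-o preload should contain:
    	for _, want := range []string{
    		"registry.k8s.io/kube-apiserver:v1.34.1",
    		"registry.k8s.io/pause:3.10.1",
    	} {
    		fmt.Printf("%s preloaded: %v\n", want, have[want])
    	}
    }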
	I1019 12:47:07.679856  567019 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:47:07.680109  567019 start.go:159] libmachine.API.Create for "NoKubernetes-352361" (driver="docker")
	I1019 12:47:07.680147  567019 client.go:168] LocalClient.Create starting
	I1019 12:47:07.680217  567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:47:07.680261  567019 main.go:141] libmachine: Decoding PEM data...
	I1019 12:47:07.680285  567019 main.go:141] libmachine: Parsing certificate...
	I1019 12:47:07.680364  567019 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:47:07.680396  567019 main.go:141] libmachine: Decoding PEM data...
	I1019 12:47:07.680464  567019 main.go:141] libmachine: Parsing certificate...
	I1019 12:47:07.680812  567019 cli_runner.go:164] Run: docker network inspect NoKubernetes-352361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:47:07.698385  567019 cli_runner.go:211] docker network inspect NoKubernetes-352361 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:47:07.698476  567019 network_create.go:284] running [docker network inspect NoKubernetes-352361] to gather additional debugging logs...
	I1019 12:47:07.698503  567019 cli_runner.go:164] Run: docker network inspect NoKubernetes-352361
	W1019 12:47:07.716119  567019 cli_runner.go:211] docker network inspect NoKubernetes-352361 returned with exit code 1
	I1019 12:47:07.716148  567019 network_create.go:287] error running [docker network inspect NoKubernetes-352361]: docker network inspect NoKubernetes-352361: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-352361 not found
	I1019 12:47:07.716171  567019 network_create.go:289] output of [docker network inspect NoKubernetes-352361]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-352361 not found
	
	** /stderr **
	I1019 12:47:07.716311  567019 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:47:07.734598  567019 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:47:07.735404  567019 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:47:07.735903  567019 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:47:07.736488  567019 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7bfed117f373 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6e:eb:6e:51:bc:90} reservation:<nil>}
	I1019 12:47:07.737236  567019 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5bcb1162b0f8 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:46:10:30:a3:e7:95} reservation:<nil>}
	I1019 12:47:07.738123  567019 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8aca0}
	I1019 12:47:07.738149  567019 network_create.go:124] attempt to create docker network NoKubernetes-352361 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:47:07.738190  567019 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-352361 NoKubernetes-352361
	I1019 12:47:07.802676  567019 network_create.go:108] docker network NoKubernetes-352361 192.168.94.0/24 created
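The network.go lines above show the subnet picker: candidate private /24s start at 192.168.49.0/24 and advance the third octet by 9 (49, 58, 67, 76, 85, ...), and the first subnet not already bound to an existing docker bridge wins, here 192.168.94.0/24. A toy reproduction of that walk, seeded with the taken subnets from this run:

    package main

    import "fmt"

    func main() {
    	// Subnets already claimed by existing minikube networks, per the log above.
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet)
    		break
    	}
    }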
	I1019 12:47:07.802704  567019 kic.go:121] calculated static IP "192.168.94.2" for the "NoKubernetes-352361" container
	I1019 12:47:07.802773  567019 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:47:07.823795  567019 cli_runner.go:164] Run: docker volume create NoKubernetes-352361 --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:47:07.844297  567019 oci.go:103] Successfully created a docker volume NoKubernetes-352361
	I1019 12:47:07.844378  567019 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-352361-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --entrypoint /usr/bin/test -v NoKubernetes-352361:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:47:08.262315  567019 oci.go:107] Successfully prepared a docker volume NoKubernetes-352361
	I1019 12:47:08.262362  567019 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:47:08.262382  567019 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:47:08.262459  567019 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-352361:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:47:11.724605  567019 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v NoKubernetes-352361:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.462099757s)
	I1019 12:47:11.724638  567019 kic.go:203] duration metric: took 3.46225224s to extract preloaded images to volume ...
	W1019 12:47:11.724737  567019 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:47:11.724779  567019 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:47:11.724831  567019 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:47:11.796510  567019 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-352361 --name NoKubernetes-352361 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-352361 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-352361 --network NoKubernetes-352361 --ip 192.168.94.2 --volume NoKubernetes-352361:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:47:12.075884  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Running}}
	I1019 12:47:12.096039  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.117338  567019 cli_runner.go:164] Run: docker exec NoKubernetes-352361 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:47:12.170507  567019 oci.go:144] the created container "NoKubernetes-352361" has a running status.
	I1019 12:47:12.170549  567019 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa...
	I1019 12:47:12.456981  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1019 12:47:12.457038  567019 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:47:12.482401  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.203834  566789 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:47:12.203864  566789 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:47:12.203875  566789 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1019 12:47:12.204074  566789 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-513789 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:47:12.204166  566789 ssh_runner.go:195] Run: crio config
	I1019 12:47:12.274357  566789 cni.go:84] Creating CNI manager for ""
	I1019 12:47:12.274385  566789 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:47:12.274404  566789 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:47:12.274539  566789 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-513789 NodeName:pause-513789 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:47:12.274731  566789 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-513789"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:47:12.274831  566789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:47:12.284815  566789 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:47:12.284901  566789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:47:12.295307  566789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1019 12:47:12.313190  566789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:47:12.335687  566789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1019 12:47:12.350616  566789 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:47:12.355596  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:12.479050  566789 ssh_runner.go:195] Run: sudo systemctl start kubelet
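The three "scp memory -->" lines write the kubelet drop-in, the kubelet unit, and the rendered kubeadm.yaml.new straight from memory to the node, then daemon-reload and start kubelet. Note that the config above deliberately disables disk-pressure eviction (all evictionHard thresholds at "0%", imageGCHighThresholdPercent: 100) and tolerates swap (failSwapOn: false), which suits a disposable CI node. A rendered config like this can be sanity-checked without mutating the node via kubeadm's dry-run mode; a sketch, assuming kubeadm is on PATH (minikube actually invokes the versioned binary under /var/lib/minikube/binaries):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Dry-run parses and validates the config without changing cluster state.
    	cmd := exec.Command("sudo", "kubeadm", "init",
    		"--config", "/var/tmp/minikube/kubeadm.yaml.new", "--dry-run")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "config rejected:", err)
    		os.Exit(1)
    	}
    }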
	I1019 12:47:12.495415  566789 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789 for IP: 192.168.85.2
	I1019 12:47:12.495457  566789 certs.go:195] generating shared ca certs ...
	I1019 12:47:12.495480  566789 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:12.495650  566789 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:47:12.495740  566789 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:47:12.495761  566789 certs.go:257] generating profile certs ...
	I1019 12:47:12.495868  566789 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key
	I1019 12:47:12.495945  566789 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.key.18d09e63
	I1019 12:47:12.495993  566789 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.key
	I1019 12:47:12.496122  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:47:12.496165  566789 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:47:12.496181  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:47:12.496211  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:47:12.496244  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:47:12.496270  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:47:12.496320  566789 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:12.497223  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:47:12.520269  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:47:12.541028  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:47:12.562203  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:47:12.581210  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 12:47:12.600406  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:47:12.618473  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:47:12.636034  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:47:12.654005  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:47:12.671893  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:47:12.690891  566789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:47:12.709109  566789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:47:12.722449  566789 ssh_runner.go:195] Run: openssl version
	I1019 12:47:12.728729  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:47:12.738028  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.742543  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.742618  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:47:12.778375  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:47:12.786735  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:47:12.795392  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.799063  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.799116  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:47:12.835223  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:47:12.843760  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:47:12.852455  566789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.856175  566789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.856232  566789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:47:12.890079  566789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:47:12.898515  566789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:47:12.902398  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:47:12.936853  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:47:12.973228  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:47:13.009486  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:47:13.045931  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:47:13.082906  566789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
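Two openssl idioms appear above: "openssl x509 -hash -noout" prints the subject hash that OpenSSL's certificate-directory lookup expects as a symlink name (hence the ln -fs targets like /etc/ssl/certs/b5213941.0), and "-checkend 86400" exits non-zero when a certificate expires within 24 hours, which is how minikube decides whether a cert needs regenerating. The expiry check expressed in pure Go:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate expires within 86400s; regenerate")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another day")
    }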
	I1019 12:47:13.118070  566789 kubeadm.go:400] StartCluster: {Name:pause-513789 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-513789 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:47:13.118191  566789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:47:13.118255  566789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:47:13.146189  566789 cri.go:89] found id: "1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff"
	I1019 12:47:13.146210  566789 cri.go:89] found id: "14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d"
	I1019 12:47:13.146214  566789 cri.go:89] found id: "920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a"
	I1019 12:47:13.146217  566789 cri.go:89] found id: "0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59"
	I1019 12:47:13.146219  566789 cri.go:89] found id: "146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197"
	I1019 12:47:13.146222  566789 cri.go:89] found id: "31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d"
	I1019 12:47:13.146224  566789 cri.go:89] found id: "dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01"
	I1019 12:47:13.146226  566789 cri.go:89] found id: ""
	I1019 12:47:13.146264  566789 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:47:13.157896  566789 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:47:13Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:47:13.157981  566789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:47:13.165644  566789 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:47:13.165663  566789 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:47:13.165700  566789 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:47:13.173040  566789 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:47:13.173786  566789 kubeconfig.go:125] found "pause-513789" server: "https://192.168.85.2:8443"
	I1019 12:47:13.174669  566789 kapi.go:59] client config for pause-513789: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key", CAFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:47:13.175076  566789 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 12:47:13.175090  566789 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 12:47:13.175095  566789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 12:47:13.175099  566789 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 12:47:13.175103  566789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 12:47:13.175449  566789 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:47:13.182686  566789 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:47:13.182719  566789 kubeadm.go:601] duration metric: took 17.05007ms to restartPrimaryControlPlane
	I1019 12:47:13.182731  566789 kubeadm.go:402] duration metric: took 64.677075ms to StartCluster
	I1019 12:47:13.182749  566789 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:13.182809  566789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:47:13.183683  566789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:47:13.183892  566789 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:47:13.183956  566789 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:47:13.184106  566789 config.go:182] Loaded profile config "pause-513789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:13.186774  566789 out.go:179] * Verifying Kubernetes components...
	I1019 12:47:13.186779  566789 out.go:179] * Enabled addons: 
	I1019 12:47:12.106818  534438 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:47:12.107267  534438 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1019 12:47:12.107322  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1019 12:47:12.107380  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1019 12:47:12.141993  534438 cri.go:89] found id: "c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:12.142020  534438 cri.go:89] found id: ""
	I1019 12:47:12.142031  534438 logs.go:282] 1 containers: [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9]
	I1019 12:47:12.142091  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.146949  534438 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1019 12:47:12.147024  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1019 12:47:12.177373  534438 cri.go:89] found id: ""
	I1019 12:47:12.177399  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.177409  534438 logs.go:284] No container was found matching "etcd"
	I1019 12:47:12.177417  534438 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1019 12:47:12.177481  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1019 12:47:12.209458  534438 cri.go:89] found id: ""
	I1019 12:47:12.209486  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.209498  534438 logs.go:284] No container was found matching "coredns"
	I1019 12:47:12.209507  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1019 12:47:12.209578  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1019 12:47:12.246133  534438 cri.go:89] found id: "f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:12.246160  534438 cri.go:89] found id: ""
	I1019 12:47:12.246172  534438 logs.go:282] 1 containers: [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa]
	I1019 12:47:12.246234  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.250135  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1019 12:47:12.250202  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1019 12:47:12.279862  534438 cri.go:89] found id: ""
	I1019 12:47:12.279888  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.279899  534438 logs.go:284] No container was found matching "kube-proxy"
	I1019 12:47:12.279919  534438 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1019 12:47:12.279978  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1019 12:47:12.316082  534438 cri.go:89] found id: "4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:12.316155  534438 cri.go:89] found id: ""
	I1019 12:47:12.316183  534438 logs.go:282] 1 containers: [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e]
	I1019 12:47:12.316284  534438 ssh_runner.go:195] Run: which crictl
	I1019 12:47:12.321372  534438 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1019 12:47:12.321523  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1019 12:47:12.355519  534438 cri.go:89] found id: ""
	I1019 12:47:12.355554  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.355564  534438 logs.go:284] No container was found matching "kindnet"
	I1019 12:47:12.355572  534438 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1019 12:47:12.355626  534438 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1019 12:47:12.387412  534438 cri.go:89] found id: ""
	I1019 12:47:12.387457  534438 logs.go:282] 0 containers: []
	W1019 12:47:12.387474  534438 logs.go:284] No container was found matching "storage-provisioner"
	I1019 12:47:12.387490  534438 logs.go:123] Gathering logs for kube-controller-manager [4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e] ...
	I1019 12:47:12.387510  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b938a7911de4cb2e349b38ec7b144de6b73a0c870a61484830d9f08510dba7e"
	I1019 12:47:12.421884  534438 logs.go:123] Gathering logs for CRI-O ...
	I1019 12:47:12.421923  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1019 12:47:12.485178  534438 logs.go:123] Gathering logs for container status ...
	I1019 12:47:12.485209  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1019 12:47:12.521745  534438 logs.go:123] Gathering logs for kubelet ...
	I1019 12:47:12.521775  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1019 12:47:12.610389  534438 logs.go:123] Gathering logs for dmesg ...
	I1019 12:47:12.610430  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1019 12:47:12.626926  534438 logs.go:123] Gathering logs for describe nodes ...
	I1019 12:47:12.626953  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1019 12:47:12.683646  534438 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1019 12:47:12.683682  534438 logs.go:123] Gathering logs for kube-apiserver [c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9] ...
	I1019 12:47:12.683701  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c78bc920a42801578b0bac54eceb591d1c6d41418692fa6aa91dd5468d6b7fd9"
	I1019 12:47:12.717714  534438 logs.go:123] Gathering logs for kube-scheduler [f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa] ...
	I1019 12:47:12.717743  534438 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f6e929623a30ffa08fd1289c3c8b87ba78adf8201a3002db496c968689409baa"
	I1019 12:47:13.187859  566789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:13.187877  566789 addons.go:514] duration metric: took 3.925144ms for enable addons: enabled=[]
	I1019 12:47:13.301401  566789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:47:13.315570  566789 node_ready.go:35] waiting up to 6m0s for node "pause-513789" to be "Ready" ...
	I1019 12:47:13.324023  566789 node_ready.go:49] node "pause-513789" is "Ready"
	I1019 12:47:13.324056  566789 node_ready.go:38] duration metric: took 8.446932ms for node "pause-513789" to be "Ready" ...
	I1019 12:47:13.324074  566789 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:47:13.324126  566789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:47:13.338608  566789 api_server.go:72] duration metric: took 154.683606ms to wait for apiserver process to appear ...
	I1019 12:47:13.338648  566789 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:47:13.338675  566789 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1019 12:47:13.345028  566789 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1019 12:47:13.346309  566789 api_server.go:141] control plane version: v1.34.1
	I1019 12:47:13.346338  566789 api_server.go:131] duration metric: took 7.681294ms to wait for apiserver health ...
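
The healthz wait is an ordinary HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A minimal sketch, skipping TLS verification only to stay self-contained (the real client authenticates with the client certificate and CA shown in the rest.Config above):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify keeps the illustration short; do not use it in
	// real code, where the cluster CA should be verified instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
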
	I1019 12:47:13.346350  566789 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:47:13.351081  566789 system_pods.go:59] 7 kube-system pods found
	I1019 12:47:13.351113  566789 system_pods.go:61] "coredns-66bc5c9577-7zzkk" [ef0f8e6f-65f2-4fde-8175-2b4225113317] Running
	I1019 12:47:13.351121  566789 system_pods.go:61] "etcd-pause-513789" [2cc0d169-04c8-4fac-95c0-7b1a16495b1d] Running
	I1019 12:47:13.351126  566789 system_pods.go:61] "kindnet-ndk9h" [5e8161fd-e69c-49f1-8f05-35afc347c891] Running
	I1019 12:47:13.351132  566789 system_pods.go:61] "kube-apiserver-pause-513789" [1e828fa7-bbcd-4a8a-85aa-056fdf001c86] Running
	I1019 12:47:13.351138  566789 system_pods.go:61] "kube-controller-manager-pause-513789" [7b65f53e-f118-47a6-a06a-99d2f76f98f1] Running
	I1019 12:47:13.351143  566789 system_pods.go:61] "kube-proxy-nf888" [09ff1219-4f13-459f-b8a7-1296f69a528a] Running
	I1019 12:47:13.351147  566789 system_pods.go:61] "kube-scheduler-pause-513789" [02076936-95b4-464d-892a-11d38e0e1bb3] Running
	I1019 12:47:13.351156  566789 system_pods.go:74] duration metric: took 4.797542ms to wait for pod list to return data ...
	I1019 12:47:13.351186  566789 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:47:13.353457  566789 default_sa.go:45] found service account: "default"
	I1019 12:47:13.353480  566789 default_sa.go:55] duration metric: took 2.281714ms for default service account to be created ...
	I1019 12:47:13.353492  566789 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:47:13.357137  566789 system_pods.go:86] 7 kube-system pods found
	I1019 12:47:13.357164  566789 system_pods.go:89] "coredns-66bc5c9577-7zzkk" [ef0f8e6f-65f2-4fde-8175-2b4225113317] Running
	I1019 12:47:13.357171  566789 system_pods.go:89] "etcd-pause-513789" [2cc0d169-04c8-4fac-95c0-7b1a16495b1d] Running
	I1019 12:47:13.357183  566789 system_pods.go:89] "kindnet-ndk9h" [5e8161fd-e69c-49f1-8f05-35afc347c891] Running
	I1019 12:47:13.357189  566789 system_pods.go:89] "kube-apiserver-pause-513789" [1e828fa7-bbcd-4a8a-85aa-056fdf001c86] Running
	I1019 12:47:13.357195  566789 system_pods.go:89] "kube-controller-manager-pause-513789" [7b65f53e-f118-47a6-a06a-99d2f76f98f1] Running
	I1019 12:47:13.357200  566789 system_pods.go:89] "kube-proxy-nf888" [09ff1219-4f13-459f-b8a7-1296f69a528a] Running
	I1019 12:47:13.357206  566789 system_pods.go:89] "kube-scheduler-pause-513789" [02076936-95b4-464d-892a-11d38e0e1bb3] Running
	I1019 12:47:13.357219  566789 system_pods.go:126] duration metric: took 3.720277ms to wait for k8s-apps to be running ...
	I1019 12:47:13.357233  566789 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:47:13.357286  566789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:47:13.373599  566789 system_svc.go:56] duration metric: took 16.34003ms WaitForService to wait for kubelet
	I1019 12:47:13.373633  566789 kubeadm.go:586] duration metric: took 189.715191ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:47:13.373655  566789 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:47:13.376826  566789 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:47:13.376855  566789 node_conditions.go:123] node cpu capacity is 8
	I1019 12:47:13.376871  566789 node_conditions.go:105] duration metric: took 3.210526ms to run NodePressure ...
	I1019 12:47:13.376886  566789 start.go:241] waiting for startup goroutines ...
	I1019 12:47:13.376895  566789 start.go:246] waiting for cluster config update ...
	I1019 12:47:13.376907  566789 start.go:255] writing updated cluster config ...
	I1019 12:47:13.377229  566789 ssh_runner.go:195] Run: rm -f paused
	I1019 12:47:13.381249  566789 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:47:13.381897  566789 kapi.go:59] client config for pause-513789: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/pause-513789/client.key", CAFile:"/home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 12:47:13.384623  566789 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7zzkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.389573  566789 pod_ready.go:94] pod "coredns-66bc5c9577-7zzkk" is "Ready"
	I1019 12:47:13.389597  566789 pod_ready.go:86] duration metric: took 4.949851ms for pod "coredns-66bc5c9577-7zzkk" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.391518  566789 pod_ready.go:83] waiting for pod "etcd-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.395600  566789 pod_ready.go:94] pod "etcd-pause-513789" is "Ready"
	I1019 12:47:13.395619  566789 pod_ready.go:86] duration metric: took 4.082836ms for pod "etcd-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.397737  566789 pod_ready.go:83] waiting for pod "kube-apiserver-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.402489  566789 pod_ready.go:94] pod "kube-apiserver-pause-513789" is "Ready"
	I1019 12:47:13.402511  566789 pod_ready.go:86] duration metric: took 4.751393ms for pod "kube-apiserver-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.404566  566789 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.785680  566789 pod_ready.go:94] pod "kube-controller-manager-pause-513789" is "Ready"
	I1019 12:47:13.785716  566789 pod_ready.go:86] duration metric: took 381.123711ms for pod "kube-controller-manager-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:13.985860  566789 pod_ready.go:83] waiting for pod "kube-proxy-nf888" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.385705  566789 pod_ready.go:94] pod "kube-proxy-nf888" is "Ready"
	I1019 12:47:14.385734  566789 pod_ready.go:86] duration metric: took 399.849542ms for pod "kube-proxy-nf888" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.585897  566789 pod_ready.go:83] waiting for pod "kube-scheduler-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.985959  566789 pod_ready.go:94] pod "kube-scheduler-pause-513789" is "Ready"
	I1019 12:47:14.985992  566789 pod_ready.go:86] duration metric: took 400.063804ms for pod "kube-scheduler-pause-513789" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:47:14.986008  566789 pod_ready.go:40] duration metric: took 1.604728835s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:47:15.031675  566789 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:47:15.033499  566789 out.go:179] * Done! kubectl is now configured to use "pause-513789" cluster and "default" namespace by default
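
The pod_ready waits above poll each labelled kube-system pod until it reports the Ready condition. A minimal client-go sketch of that kind of poll, assuming a hypothetical kubeconfig path; this is an illustration, not minikube's own implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}
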
	I1019 12:47:12.502429  567019 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:47:12.502454  567019 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-352361 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:47:12.549270  567019 cli_runner.go:164] Run: docker container inspect NoKubernetes-352361 --format={{.State.Status}}
	I1019 12:47:12.568093  567019 machine.go:93] provisionDockerMachine start ...
	I1019 12:47:12.568212  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:12.587897  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:12.588244  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:12.588274  567019 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:47:12.588984  567019 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57968->127.0.0.1:33410: read: connection reset by peer
	I1019 12:47:15.722316  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-352361
	
	I1019 12:47:15.722353  567019 ubuntu.go:182] provisioning hostname "NoKubernetes-352361"
	I1019 12:47:15.722411  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:15.740588  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:15.740815  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:15.740828  567019 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-352361 && echo "NoKubernetes-352361" | sudo tee /etc/hostname
	I1019 12:47:15.887305  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-352361
	
	I1019 12:47:15.887395  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:15.908261  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:15.908505  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:15.908524  567019 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-352361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-352361/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-352361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:47:16.045549  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:47:16.045589  567019 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:47:16.045637  567019 ubuntu.go:190] setting up certificates
	I1019 12:47:16.045672  567019 provision.go:84] configureAuth start
	I1019 12:47:16.045743  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:16.064335  567019 provision.go:143] copyHostCerts
	I1019 12:47:16.064371  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:16.064399  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:47:16.064408  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:47:16.064517  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:47:16.064617  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:16.064637  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:47:16.064655  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:47:16.064701  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:47:16.064758  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:16.064776  567019 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:47:16.064780  567019 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:47:16.064814  567019 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:47:16.064877  567019 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-352361 san=[127.0.0.1 192.168.94.2 NoKubernetes-352361 localhost minikube]
	I1019 12:47:16.292981  567019 provision.go:177] copyRemoteCerts
	I1019 12:47:16.293040  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:47:16.293076  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.311370  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:16.408089  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1019 12:47:16.408158  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:47:16.429066  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1019 12:47:16.429149  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 12:47:16.447703  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1019 12:47:16.447801  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:47:16.469778  567019 provision.go:87] duration metric: took 424.089418ms to configureAuth
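
configureAuth above issues a server certificate whose SANs cover the names and addresses in the san=[...] list. A minimal Go sketch of issuing such a certificate; it is self-signed here for brevity, whereas minikube signs with the CA key from its certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.NoKubernetes-352361"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the san=[...] list in the log above.
		DNSNames:    []string{"NoKubernetes-352361", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed (template == parent) to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
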
	I1019 12:47:16.469810  567019 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:47:16.469996  567019 config.go:182] Loaded profile config "NoKubernetes-352361": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:47:16.470110  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.488342  567019 main.go:141] libmachine: Using SSH client type: native
	I1019 12:47:16.488630  567019 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1019 12:47:16.488657  567019 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:47:16.735935  567019 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:47:16.735960  567019 machine.go:96] duration metric: took 4.16784384s to provisionDockerMachine
	I1019 12:47:16.735973  567019 client.go:171] duration metric: took 9.0558178s to LocalClient.Create
	I1019 12:47:16.735998  567019 start.go:167] duration metric: took 9.055888318s to libmachine.API.Create "NoKubernetes-352361"
	I1019 12:47:16.736007  567019 start.go:293] postStartSetup for "NoKubernetes-352361" (driver="docker")
	I1019 12:47:16.736021  567019 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:47:16.736079  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:47:16.736119  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.754587  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:16.853578  567019 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:47:16.857194  567019 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:47:16.857221  567019 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:47:16.857232  567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:47:16.857289  567019 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:47:16.857357  567019 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:47:16.857367  567019 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> /etc/ssl/certs/3552622.pem
	I1019 12:47:16.857466  567019 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:47:16.865017  567019 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:47:16.884810  567019 start.go:296] duration metric: took 148.78602ms for postStartSetup
	I1019 12:47:16.885193  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:16.903040  567019 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/NoKubernetes-352361/config.json ...
	I1019 12:47:16.903275  567019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:47:16.903333  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:16.921188  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.016236  567019 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:47:17.021128  567019 start.go:128] duration metric: took 9.343039619s to createHost
	I1019 12:47:17.021160  567019 start.go:83] releasing machines lock for "NoKubernetes-352361", held for 9.343173181s
	I1019 12:47:17.021235  567019 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-352361
	I1019 12:47:17.039849  567019 ssh_runner.go:195] Run: cat /version.json
	I1019 12:47:17.039893  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:17.039927  567019 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:47:17.040001  567019 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-352361
	I1019 12:47:17.060356  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.060580  567019 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/NoKubernetes-352361/id_rsa Username:docker}
	I1019 12:47:17.208234  567019 ssh_runner.go:195] Run: systemctl --version
	I1019 12:47:17.215181  567019 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:47:17.250344  567019 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:47:17.255213  567019 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:47:17.255269  567019 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:47:17.281978  567019 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:47:17.282001  567019 start.go:495] detecting cgroup driver to use...
	I1019 12:47:17.282029  567019 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:47:17.282074  567019 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:47:17.298331  567019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:47:17.310636  567019 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:47:17.310702  567019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:47:17.331699  567019 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:47:17.349218  567019 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:47:17.440104  567019 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:47:17.536827  567019 docker.go:234] disabling docker service ...
	I1019 12:47:17.536907  567019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:47:17.558191  567019 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:47:17.573304  567019 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:47:17.666242  567019 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:47:17.751668  567019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:47:17.764661  567019 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:47:17.781030  567019 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:47:17.781098  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.793775  567019 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:47:17.793854  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.805547  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.815572  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.825550  567019 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:47:17.833687  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.842832  567019 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.856797  567019 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:47:17.867503  567019 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:47:17.875806  567019 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:47:17.884376  567019 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:47:17.971864  567019 ssh_runner.go:195] Run: sudo systemctl restart crio
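
The sed one-liners above rewrite whole lines of /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A Go sketch of the same line replacement on an assumed config excerpt (the "before" values are invented for the example):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical excerpt of 02-crio.conf before the rewrite.
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	// Same effect as `sed -i 's|^.*pause_image = .*$|...|'` in the log:
	// replace each matching line wholesale.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
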
	I1019 12:47:18.086160  567019 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:47:18.086229  567019 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:47:18.090200  567019 start.go:563] Will wait 60s for crictl version
	I1019 12:47:18.090252  567019 ssh_runner.go:195] Run: which crictl
	I1019 12:47:18.094273  567019 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:47:18.119193  567019 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:47:18.119262  567019 ssh_runner.go:195] Run: crio --version
	I1019 12:47:18.148106  567019 ssh_runner.go:195] Run: crio --version
	I1019 12:47:18.180278  567019 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.942442676Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943319446Z" level=info msg="Conmon does support the --sync option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943337897Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.943356272Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.944132779Z" level=info msg="Conmon does support the --sync option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.944149949Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.947800757Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.947818744Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948271343Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/c
ni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/
var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948634122Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.948685328Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.955143371Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996162598Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-7zzkk Namespace:kube-system ID:0ae173fcca407acb8faab2f3ceab6f28241c08ea23f839f805341bd6656d1da1 UID:ef0f8e6f-65f2-4fde-8175-2b4225113317 NetNS:/var/run/netns/a2ec5cca-d77d-47b3-8268-d188d1986418 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000314708}] Aliases:map[]}"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996380439Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-7zzkk for CNI network kindnet (type=ptp)"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996909966Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996934649Z" level=info msg="Starting seccomp notifier watcher"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.996987408Z" level=info msg="Create NRI interface"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997084481Z" level=info msg="built-in NRI default validator is disabled"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997101015Z" level=info msg="runtime interface created"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997114991Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997123485Z" level=info msg="runtime interface starting up..."
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997131766Z" level=info msg="starting plugins..."
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997144186Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 19 12:47:11 pause-513789 crio[2187]: time="2025-10-19T12:47:11.997474026Z" level=info msg="No systemd watchdog enabled"
	Oct 19 12:47:11 pause-513789 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1ec0fc3c0ba24       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   0ae173fcca407       coredns-66bc5c9577-7zzkk               kube-system
	14eb9e8442058       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   3a3e42903412b       kube-proxy-nf888                       kube-system
	920deebc214e2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   1a5629f06bc75       kindnet-ndk9h                          kube-system
	0528c1143dc03       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   82d290076151f       kube-controller-manager-pause-513789   kube-system
	146dd0c10eabe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   086c289434acb       kube-apiserver-pause-513789            kube-system
	31890535538bc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   da42b8b5f87ea       etcd-pause-513789                      kube-system
	dcc8e66b624dd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   a9fa1c5cde3a0       kube-scheduler-pause-513789            kube-system
	
	
	==> coredns [1ec0fc3c0ba24d3e699dd67f8810fed4621f9c513043af45479a9c5d807702ff] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55131 - 11986 "HINFO IN 9083236024844154690.5669056859740369869. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091341844s
	
	
	==> describe nodes <==
	Name:               pause-513789
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-513789
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=pause-513789
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_46_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:46:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-513789
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:47:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:46:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:47:04 +0000   Sun, 19 Oct 2025 12:47:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-513789
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                716b4031-f39f-49a2-9750-0f1bb7ecc1c1
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-7zzkk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-513789                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-ndk9h                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-513789             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-513789    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-nf888                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-513789             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node pause-513789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node pause-513789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x8 over 39s)  kubelet          Node pause-513789 status is now: NodeHasSufficientPID
	  Normal  Starting                 33s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s                kubelet          Node pause-513789 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s                kubelet          Node pause-513789 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s                kubelet          Node pause-513789 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node pause-513789 event: Registered Node pause-513789 in Controller
	  Normal  NodeReady                16s                kubelet          Node pause-513789 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 31 d3 aa 8a bd 08 06
	[  +0.000317] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff c6 bc e1 50 25 8b 08 06
	[Oct19 12:08] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.045444] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023837] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023888] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +2.047737] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[  +8.512033] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[Oct19 12:09] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	[ +32.252549] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 0a 76 74 91 b9 f5 4e f0 4b 62 76 45 08 00
	
	
	==> etcd [31890535538bc421b95363fbb2b2a58fc25aae9ac690403cf135ef78a607e96d] <==
	{"level":"warn","ts":"2025-10-19T12:46:44.052012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.062567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.071982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.084980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.094063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.102792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.118753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.126104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.138840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.144336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.154249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.163999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.172171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.181013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.188323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.196258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.204976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.211437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.219946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.234782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.242815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.249879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.258282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:46:44.327317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:47:11.640929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.163864ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596480354682960 > lease_revoke:<id:06ed99fc820107d2>","response":"size:28"}
	
	
	==> kernel <==
	 12:47:20 up  2:29,  0 user,  load average: 3.67, 3.02, 1.96
	Linux pause-513789 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [920deebc214e2af14fcd54c5e9f2885245b1e6c033a03100dbc98aff69d1509a] <==
	I1019 12:46:53.412306       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:46:53.412589       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 12:46:53.412762       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:46:53.412779       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:46:53.412796       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:46:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:46:53.706686       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:46:53.706712       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:46:53.706730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:46:53.709364       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:46:54.006849       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:46:54.006876       1 metrics.go:72] Registering metrics
	I1019 12:46:54.006954       1 controller.go:711] "Syncing nftables rules"
	I1019 12:47:03.708493       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:47:03.708550       1 main.go:301] handling current node
	I1019 12:47:13.713791       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:47:13.713822       1 main.go:301] handling current node
	
	
	==> kube-apiserver [146dd0c10eabe2f6580ce9036e41fb648b6c4762abbd294a65c9313f61ee9197] <==
	I1019 12:46:44.843315       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:46:44.843618       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:46:44.844984       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:46:44.849294       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:44.849411       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 12:46:44.857261       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:44.858162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:46:45.040128       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:46:45.747332       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:46:45.751014       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:46:45.751033       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:46:46.215924       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:46:46.252862       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:46:46.353235       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:46:46.358789       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 12:46:46.359940       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:46:46.363875       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:46:46.805938       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:46:47.362334       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:46:47.372330       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:46:47.381654       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:46:52.510203       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:52.514687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:46:52.807829       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:46:52.857395       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0528c1143dc0359455551f53f87509d4b20895517dfd2e448eeb029e9f2cbd59] <==
	I1019 12:46:51.764684       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:46:51.765845       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:46:51.804797       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 12:46:51.804829       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:46:51.804933       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:46:51.806162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 12:46:51.806249       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:46:51.806260       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:46:51.806249       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:46:51.806324       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:46:51.806325       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:46:51.806436       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:46:51.806727       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:46:51.806857       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:46:51.809737       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:46:51.809768       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:46:51.809827       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:46:51.809878       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:46:51.809888       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:46:51.809894       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:46:51.810970       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:46:51.816118       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:46:51.817335       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-513789" podCIDRs=["10.244.0.0/24"]
	I1019 12:46:51.830901       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:47:06.758827       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [14eb9e844205895def56603a546e32a8ab831cb3660f127a0c21e7ebbe546d9d] <==
	I1019 12:46:53.262281       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:46:53.315498       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:46:53.415849       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:46:53.415914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 12:46:53.416011       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:46:53.438518       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:46:53.438584       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:46:53.445824       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:46:53.446227       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:46:53.446250       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:46:53.447954       1 config.go:200] "Starting service config controller"
	I1019 12:46:53.448023       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:46:53.447974       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:46:53.447987       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:46:53.448018       1 config.go:309] "Starting node config controller"
	I1019 12:46:53.448062       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:46:53.448064       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:46:53.448055       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:46:53.448067       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:46:53.548488       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:46:53.548485       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:46:53.548536       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [dcc8e66b624ddc6e3f31455e07e4c4d89d8ed87a1176cf3098ab8c6a1a62bb01] <==
	E1019 12:46:44.806580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:46:44.806630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:46:44.806912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:46:44.806937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:46:44.806957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:46:44.806989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:46:44.806992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:46:44.807040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:46:44.807090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:46:44.807094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:46:44.807230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:46:44.807235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:46:44.807377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:46:44.807376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:46:45.616597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:46:45.698869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:46:45.744399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:46:45.757612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:46:45.793057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:46:45.882544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:46:45.944116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:46:45.988108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:46:46.004660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:46:46.072772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 12:46:49.203944       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:46:54 pause-513789 kubelet[1320]: I1019 12:46:54.250068    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nf888" podStartSLOduration=2.250063543 podStartE2EDuration="2.250063543s" podCreationTimestamp="2025-10-19 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:46:54.240467081 +0000 UTC m=+7.132736001" watchObservedRunningTime="2025-10-19 12:46:54.250063543 +0000 UTC m=+7.142332443"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.083958    1320 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.212950    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0f8e6f-65f2-4fde-8175-2b4225113317-config-volume\") pod \"coredns-66bc5c9577-7zzkk\" (UID: \"ef0f8e6f-65f2-4fde-8175-2b4225113317\") " pod="kube-system/coredns-66bc5c9577-7zzkk"
	Oct 19 12:47:04 pause-513789 kubelet[1320]: I1019 12:47:04.212995    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48tr9\" (UniqueName: \"kubernetes.io/projected/ef0f8e6f-65f2-4fde-8175-2b4225113317-kube-api-access-48tr9\") pod \"coredns-66bc5c9577-7zzkk\" (UID: \"ef0f8e6f-65f2-4fde-8175-2b4225113317\") " pod="kube-system/coredns-66bc5c9577-7zzkk"
	Oct 19 12:47:05 pause-513789 kubelet[1320]: I1019 12:47:05.280359    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7zzkk" podStartSLOduration=12.280334008 podStartE2EDuration="12.280334008s" podCreationTimestamp="2025-10-19 12:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:47:05.268467155 +0000 UTC m=+18.160736061" watchObservedRunningTime="2025-10-19 12:47:05.280334008 +0000 UTC m=+18.172602908"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.199908    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200017    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200122    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200146    1320 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.200166    1320 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265406    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265480    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: E1019 12:47:09.265494    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.300710    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.481955    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:09 pause-513789 kubelet[1320]: W1019 12:47:09.772618    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: W1019 12:47:10.177925    1320 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265894    1320 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265972    1320 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:10 pause-513789 kubelet[1320]: E1019 12:47:10.265999    1320 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 19 12:47:12 pause-513789 kubelet[1320]: E1019 12:47:12.217269    1320 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Oct 19 12:47:15 pause-513789 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:47:15 pause-513789 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:47:15 pause-513789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:47:15 pause-513789 systemd[1]: kubelet.service: Consumed 1.199s CPU time.
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-513789 -n pause-513789
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-513789 -n pause-513789: exit status 2 (332.178753ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-513789 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.93s)
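
Note: the kubelet tail above ends with repeated failures to dial the CRI socket (`dial unix /var/run/crio/crio.sock: connect: no such file or directory`) before systemd stops the kubelet, which suggests the pause left the runtime socket unreachable while the kubelet was still polling it. A minimal Go sketch for probing that socket by hand on the node (e.g. via `minikube ssh`); the socket path is taken from the log, and the program itself is an illustration, not the test's own check:

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the CRI socket that the kubelet log above keeps failing to dial.
// The path is the crio default seen in the log; adjust for other runtimes.
func main() {
	const sock = "/var/run/crio/crio.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("CRI socket unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("CRI socket reachable")
}

A dial error of the same no-such-file form reproduces what the ListPodSandbox and GenericPLEG errors above report.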
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.155856ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:51:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-577062 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-577062 describe deploy/metrics-server -n kube-system: exit status 1 (62.644047ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-577062 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
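
Note: the MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe, which runs `sudo runc list -f json` on the node; here it exits non-zero because `/run/runc` is missing (see the stderr block). A rough Go equivalent for re-running that probe by hand inside the node; the command is copied verbatim from the stderr, while the wrapper program is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
)

// Re-run the paused-state probe from the MK_ADDON_ENABLE_PAUSED error.
// Execute this inside the node (e.g. via `minikube ssh`).
func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// On this node the probe fails with
		// "open /run/runc: no such file or directory".
		fmt.Printf("runc list failed: %v\n", err)
	}
}

On a working runc setup this prints a JSON array of containers; on this node it should reproduce the `open /run/runc: no such file or directory` error quoted above, which is consistent with the addon never being deployed and `kubectl describe deploy/metrics-server` finding nothing.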
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-577062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-577062:
-- stdout --
	[
	    {
	        "Id": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	        "Created": "2025-10-19T12:50:42.983195608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:50:43.019182533Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hostname",
	        "HostsPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hosts",
	        "LogPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da-json.log",
	        "Name": "/old-k8s-version-577062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-577062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-577062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	                "LowerDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-577062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-577062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-577062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3918eddc20d94181057ecb802ab26f8ffa7d8abf21e56463b3754113c0c4edd2",
	            "SandboxKey": "/var/run/docker/netns/3918eddc20d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-577062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:9d:11:1e:12:9e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "502db93731f3c65b158cfaea0389f311a4314988a15a727b3ce6c492ca19cd92",
	                    "EndpointID": "bbd271ae846e458894258574a5932b4954f709f49be44de1ca67bd46fa29eb1d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-577062",
	                        "368928979a17"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25: (1.036769812s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo systemctl status kubelet --all --full --no-pager                                                                       │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl cat kubelet --no-pager                                                                                       │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo journalctl -xeu kubelet --all --full --no-pager                                                                        │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/kubernetes/kubelet.conf                                                                                       │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /var/lib/kubelet/config.yaml                                                                                       │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status docker --all --full --no-pager                                                                        │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat docker --no-pager                                                                                        │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/docker/daemon.json                                                                                            │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo docker system info                                                                                                     │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl status cri-docker --all --full --no-pager                                                                    │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat cri-docker --no-pager                                                                                    │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                               │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                         │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cri-dockerd --version                                                                                                  │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status containerd --all --full --no-pager                                                                    │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat containerd --no-pager                                                                                    │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /lib/systemd/system/containerd.service                                                                             │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/containerd/config.toml                                                                                        │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo containerd config dump                                                                                                 │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status crio --all --full --no-pager                                                                          │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl cat crio --no-pager                                                                                          │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ old-k8s-version-577062 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                            │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                             │ bridge-931932          │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
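
The Audit table above is rendered from minikube's command history, which is also kept on disk as one JSON event per line. A sketch for filtering it by profile, assuming the default audit log location under MINIKUBE_HOME and that the event fields mirror the table columns:

  jq -r 'select(.data.profile == "old-k8s-version-577062") | .data.command + " " + .data.args' \
    ~/.minikube/logs/audit.json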
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:51:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:51:06.853214  641657 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:51:06.853492  641657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:51:06.853502  641657 out.go:374] Setting ErrFile to fd 2...
	I1019 12:51:06.853507  641657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:51:06.853765  641657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:51:06.854323  641657 out.go:368] Setting JSON to false
	I1019 12:51:06.855557  641657 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9215,"bootTime":1760869052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:51:06.855652  641657 start.go:141] virtualization: kvm guest
	I1019 12:51:06.857494  641657 out.go:179] * [embed-certs-123864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:51:06.858714  641657 notify.go:220] Checking for updates...
	I1019 12:51:06.858757  641657 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:51:06.859977  641657 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:51:06.861044  641657 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:51:06.862238  641657 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:51:06.863383  641657 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:51:06.864488  641657 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1019 12:51:02.677831  617768 pod_ready.go:104] pod "coredns-66bc5c9577-hp4ql" is not "Ready", error: <nil>
	W1019 12:51:04.678601  617768 pod_ready.go:104] pod "coredns-66bc5c9577-hp4ql" is not "Ready", error: <nil>
	I1019 12:51:06.866193  641657 config.go:182] Loaded profile config "bridge-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:06.866346  641657 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:06.866485  641657 config.go:182] Loaded profile config "old-k8s-version-577062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 12:51:06.866593  641657 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:51:06.890608  641657 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:51:06.890712  641657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:51:06.949712  641657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-19 12:51:06.938361454 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
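
The single-line dump above is what cli_runner captures from docker system info --format "{{json .}}". When reading it by hand, piping the same command through jq and keeping only the fields minikube actually validates is easier; a sketch, with field names as they appear in the dump:

  docker system info --format '{{json .}}' \
    | jq '{Driver, CgroupDriver, MemTotal, ServerVersion, SecurityOptions}'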
	I1019 12:51:06.949811  641657 docker.go:318] overlay module found
	I1019 12:51:06.951462  641657 out.go:179] * Using the docker driver based on user configuration
	I1019 12:51:06.952497  641657 start.go:305] selected driver: docker
	I1019 12:51:06.952512  641657 start.go:925] validating driver "docker" against <nil>
	I1019 12:51:06.952523  641657 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:51:06.953111  641657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:51:07.011092  641657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-19 12:51:07.000815539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:51:07.011288  641657 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:51:07.011536  641657 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:51:07.013308  641657 out.go:179] * Using Docker driver with root privileges
	I1019 12:51:07.014374  641657 cni.go:84] Creating CNI manager for ""
	I1019 12:51:07.014485  641657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:51:07.014503  641657 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:51:07.014582  641657 start.go:349] cluster config:
	{Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
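
The cluster config printed above is also persisted as JSON in the profile directory (see the profile.go:143 save a few lines below), so the same structure can be inspected after the run. A sketch, assuming the saved config.json mirrors the struct fields shown:

  jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime, NetworkPlugin, ServiceCIDR}' \
    ~/.minikube/profiles/embed-certs-123864/config.json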
	I1019 12:51:07.016085  641657 out.go:179] * Starting "embed-certs-123864" primary control-plane node in "embed-certs-123864" cluster
	I1019 12:51:07.017139  641657 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:51:07.018301  641657 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:51:07.019518  641657 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:51:07.019550  641657 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:51:07.019563  641657 cache.go:58] Caching tarball of preloaded images
	I1019 12:51:07.019629  641657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:51:07.019667  641657 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:51:07.019677  641657 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:51:07.019757  641657 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:51:07.019774  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json: {Name:mkcc97bcea2160e8acd825b96e5f847d4bb22b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:07.042979  641657 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:51:07.043002  641657 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:51:07.043019  641657 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:51:07.043042  641657 start.go:360] acquireMachinesLock for embed-certs-123864: {Name:mka6cb4ad88c794a0c6bc198cee02944cf3132f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:51:07.043133  641657 start.go:364] duration metric: took 74.93µs to acquireMachinesLock for "embed-certs-123864"
	I1019 12:51:07.043156  641657 start.go:93] Provisioning new machine with config: &{Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:51:07.043243  641657 start.go:125] createHost starting for "" (driver="docker")
	I1019 12:51:05.527836  633056 out.go:252]   - Generating certificates and keys ...
	I1019 12:51:05.527966  633056 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:51:05.528064  633056 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:51:05.528195  633056 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:51:05.528297  633056 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:51:05.742276  633056 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:51:05.847020  633056 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:51:06.150896  633056 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:51:06.151057  633056 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-561408] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:51:06.222946  633056 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:51:06.223140  633056 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-561408] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:51:06.670643  633056 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:51:07.070712  633056 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:51:07.516312  633056 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:51:07.516460  633056 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:51:07.663455  633056 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:51:07.925722  633056 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:51:08.347274  633056 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:51:08.619235  633056 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:51:08.698321  633056 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:51:08.698801  633056 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:51:08.704792  633056 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1019 12:51:07.176897  617768 pod_ready.go:104] pod "coredns-66bc5c9577-hp4ql" is not "Ready", error: <nil>
	I1019 12:51:07.677983  617768 pod_ready.go:94] pod "coredns-66bc5c9577-hp4ql" is "Ready"
	I1019 12:51:07.678007  617768 pod_ready.go:86] duration metric: took 35.506239163s for pod "coredns-66bc5c9577-hp4ql" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.681633  617768 pod_ready.go:83] waiting for pod "etcd-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.686219  617768 pod_ready.go:94] pod "etcd-bridge-931932" is "Ready"
	I1019 12:51:07.686242  617768 pod_ready.go:86] duration metric: took 4.586119ms for pod "etcd-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.688386  617768 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.692407  617768 pod_ready.go:94] pod "kube-apiserver-bridge-931932" is "Ready"
	I1019 12:51:07.692441  617768 pod_ready.go:86] duration metric: took 4.031194ms for pod "kube-apiserver-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.694362  617768 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:07.875956  617768 pod_ready.go:94] pod "kube-controller-manager-bridge-931932" is "Ready"
	I1019 12:51:07.875989  617768 pod_ready.go:86] duration metric: took 181.604354ms for pod "kube-controller-manager-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:08.076493  617768 pod_ready.go:83] waiting for pod "kube-proxy-dddxz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:08.476151  617768 pod_ready.go:94] pod "kube-proxy-dddxz" is "Ready"
	I1019 12:51:08.476174  617768 pod_ready.go:86] duration metric: took 399.655982ms for pod "kube-proxy-dddxz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:08.676728  617768 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:09.075894  617768 pod_ready.go:94] pod "kube-scheduler-bridge-931932" is "Ready"
	I1019 12:51:09.075923  617768 pod_ready.go:86] duration metric: took 399.17045ms for pod "kube-scheduler-bridge-931932" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:09.075934  617768 pod_ready.go:40] duration metric: took 36.908274117s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:51:09.130442  617768 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:51:09.134038  617768 out.go:179] * Done! kubectl is now configured to use "bridge-931932" cluster and "default" namespace by default
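
The pod_ready polling above took 36.9s across the six labelled kube-system components, with CoreDNS the slow one. The equivalent one-off check from outside the test harness is a kubectl wait per label; a sketch for the CoreDNS case:

  kubectl --context bridge-931932 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=6m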
	I1019 12:51:08.707517  633056 out.go:252]   - Booting up control plane ...
	I1019 12:51:08.707655  633056 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:51:08.707770  633056 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:51:08.707868  633056 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:51:08.724821  633056 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:51:08.725006  633056 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:51:08.734504  633056 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:51:08.734704  633056 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:51:08.734783  633056 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:51:08.850205  633056 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:51:08.850384  633056 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:51:09.353723  633056 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 503.385944ms
	I1019 12:51:09.362507  633056 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:51:09.362944  633056 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:51:09.363405  633056 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:51:09.363536  633056 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
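
kubeadm's control-plane-check above polls three plain HTTPS endpoints on the node. If one of them stalls, they can be probed directly inside the kicbase container; a sketch, assuming curl is present in the image (-k because the components serve self-signed certificates):

  docker exec no-preload-561408 curl -ksf https://127.0.0.1:10257/healthz   # kube-controller-manager
  docker exec no-preload-561408 curl -ksf https://127.0.0.1:10259/livez     # kube-scheduler
  docker exec no-preload-561408 curl -ksf https://192.168.94.2:8443/livez   # kube-apiserver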
	I1019 12:51:06.703091  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:07.203432  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:07.703503  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:08.203203  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:08.703228  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:09.203457  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:09.702565  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:10.202545  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:10.703284  629457 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:11.001176  629457 kubeadm.go:1113] duration metric: took 11.901880337s to wait for elevateKubeSystemPrivileges
	I1019 12:51:11.001329  629457 kubeadm.go:402] duration metric: took 23.390269629s to StartCluster
	I1019 12:51:11.001355  629457 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:11.001682  629457 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:51:11.003475  629457 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:11.090247  629457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:51:11.090268  629457 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:51:11.090360  629457 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:51:11.091453  629457 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-577062"
	I1019 12:51:11.090484  629457 config.go:182] Loaded profile config "old-k8s-version-577062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 12:51:11.091485  629457 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-577062"
	I1019 12:51:11.091520  629457 host.go:66] Checking if "old-k8s-version-577062" exists ...
	I1019 12:51:11.091531  629457 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-577062"
	I1019 12:51:11.091546  629457 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-577062"
	I1019 12:51:11.091943  629457 cli_runner.go:164] Run: docker container inspect old-k8s-version-577062 --format={{.State.Status}}
	I1019 12:51:11.092042  629457 cli_runner.go:164] Run: docker container inspect old-k8s-version-577062 --format={{.State.Status}}
	I1019 12:51:11.132911  629457 out.go:179] * Verifying Kubernetes components...
	I1019 12:51:11.133241  629457 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-577062"
	I1019 12:51:11.134534  629457 host.go:66] Checking if "old-k8s-version-577062" exists ...
	I1019 12:51:11.135054  629457 cli_runner.go:164] Run: docker container inspect old-k8s-version-577062 --format={{.State.Status}}
	I1019 12:51:11.157329  629457 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:11.157362  629457 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:51:11.157445  629457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:51:11.158327  629457 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:51:11.186587  629457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:51:11.222598  629457 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:51:07.045111  641657 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:51:07.045332  641657 start.go:159] libmachine.API.Create for "embed-certs-123864" (driver="docker")
	I1019 12:51:07.045358  641657 client.go:168] LocalClient.Create starting
	I1019 12:51:07.045446  641657 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:51:07.045476  641657 main.go:141] libmachine: Decoding PEM data...
	I1019 12:51:07.045494  641657 main.go:141] libmachine: Parsing certificate...
	I1019 12:51:07.045561  641657 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:51:07.045582  641657 main.go:141] libmachine: Decoding PEM data...
	I1019 12:51:07.045589  641657 main.go:141] libmachine: Parsing certificate...
	I1019 12:51:07.045909  641657 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:51:07.062956  641657 cli_runner.go:211] docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:51:07.063038  641657 network_create.go:284] running [docker network inspect embed-certs-123864] to gather additional debugging logs...
	I1019 12:51:07.063057  641657 cli_runner.go:164] Run: docker network inspect embed-certs-123864
	W1019 12:51:07.079692  641657 cli_runner.go:211] docker network inspect embed-certs-123864 returned with exit code 1
	I1019 12:51:07.079719  641657 network_create.go:287] error running [docker network inspect embed-certs-123864]: docker network inspect embed-certs-123864: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-123864 not found
	I1019 12:51:07.079747  641657 network_create.go:289] output of [docker network inspect embed-certs-123864]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-123864 not found
	
	** /stderr **
	I1019 12:51:07.079845  641657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:51:07.096661  641657 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:51:07.097392  641657 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:51:07.097873  641657 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:51:07.098685  641657 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed79c0}
	I1019 12:51:07.098707  641657 network_create.go:124] attempt to create docker network embed-certs-123864 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1019 12:51:07.098750  641657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-123864 embed-certs-123864
	I1019 12:51:07.158760  641657 network_create.go:108] docker network embed-certs-123864 192.168.76.0/24 created
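
The subnet walk above skipped 192.168.49.0/24, .58 and .67 before settling on 192.168.76.0/24. The taken subnets it is avoiding can be listed straight from docker; a sketch:

  docker network ls --format '{{.Name}}' \
    | xargs -I{} docker network inspect {} --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'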
	I1019 12:51:07.158793  641657 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-123864" container
	I1019 12:51:07.158845  641657 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:51:07.177981  641657 cli_runner.go:164] Run: docker volume create embed-certs-123864 --label name.minikube.sigs.k8s.io=embed-certs-123864 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:51:07.196254  641657 oci.go:103] Successfully created a docker volume embed-certs-123864
	I1019 12:51:07.196340  641657 cli_runner.go:164] Run: docker run --rm --name embed-certs-123864-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-123864 --entrypoint /usr/bin/test -v embed-certs-123864:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:51:07.633854  641657 oci.go:107] Successfully prepared a docker volume embed-certs-123864
	I1019 12:51:07.633906  641657 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:51:07.633932  641657 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:51:07.634004  641657 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-123864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1019 12:51:11.279557  629457 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:11.279592  629457 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:51:11.279658  629457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:51:11.302524  629457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:51:11.344587  629457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:11.356565  629457 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:51:11.370873  629457 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:51:11.499008  629457 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:12.003100  629457 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
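
The bash pipeline above splices a hosts block for host.minikube.internal into the CoreDNS Corefile and replaces the ConfigMap. A sketch for verifying the injected record afterwards:

  kubectl --context old-k8s-version-577062 -n kube-system \
    get configmap coredns -o jsonpath='{.data.Corefile}'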
	I1019 12:51:12.004177  629457 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:51:12.517145  629457 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-577062" context rescaled to 1 replicas
	I1019 12:51:12.615012  629457 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.115945008s)
	I1019 12:51:12.627802  629457 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1019 12:51:10.987721  633056 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.625397642s
	I1019 12:51:12.574691  633056 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.211899339s
	I1019 12:51:14.365409  633056 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.003122771s
	I1019 12:51:14.381678  633056 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:51:14.393843  633056 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:51:14.404716  633056 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:51:14.405190  633056 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-561408 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:51:14.414761  633056 kubeadm.go:318] [bootstrap-token] Using token: teg89v.er9x6ru2zigs3ldd
	I1019 12:51:14.415947  633056 out.go:252]   - Configuring RBAC rules ...
	I1019 12:51:14.416101  633056 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:51:14.420379  633056 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:51:14.427060  633056 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:51:14.431220  633056 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:51:14.434190  633056 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:51:14.437720  633056 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:51:14.773124  633056 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:51:15.189659  633056 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:51:15.773019  633056 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:51:15.773945  633056 kubeadm.go:318] 
	I1019 12:51:15.774036  633056 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:51:15.774060  633056 kubeadm.go:318] 
	I1019 12:51:15.774168  633056 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:51:15.774177  633056 kubeadm.go:318] 
	I1019 12:51:15.774238  633056 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:51:15.774338  633056 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:51:15.774402  633056 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:51:15.774412  633056 kubeadm.go:318] 
	I1019 12:51:15.774502  633056 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:51:15.774510  633056 kubeadm.go:318] 
	I1019 12:51:15.774563  633056 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:51:15.774585  633056 kubeadm.go:318] 
	I1019 12:51:15.774659  633056 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:51:15.774777  633056 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:51:15.774881  633056 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:51:15.774894  633056 kubeadm.go:318] 
	I1019 12:51:15.775017  633056 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:51:15.775124  633056 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:51:15.775133  633056 kubeadm.go:318] 
	I1019 12:51:15.775269  633056 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token teg89v.er9x6ru2zigs3ldd \
	I1019 12:51:15.775415  633056 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:51:15.775460  633056 kubeadm.go:318] 	--control-plane 
	I1019 12:51:15.775467  633056 kubeadm.go:318] 
	I1019 12:51:15.775601  633056 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:51:15.775617  633056 kubeadm.go:318] 
	I1019 12:51:15.775700  633056 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token teg89v.er9x6ru2zigs3ldd \
	I1019 12:51:15.775837  633056 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:51:15.778209  633056 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:51:15.778313  633056 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
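
If the join command above is lost, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA with the standard kubeadm recipe (run on the control-plane node; this is the documented upstream incantation, not something this test performs):

  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'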
	I1019 12:51:15.778349  633056 cni.go:84] Creating CNI manager for ""
	I1019 12:51:15.778362  633056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:51:15.780081  633056 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:51:12.632992  629457 addons.go:514] duration metric: took 1.542622272s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1019 12:51:14.012646  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	I1019 12:51:12.461621  641657 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-123864:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.827546497s)
	I1019 12:51:12.461662  641657 kic.go:203] duration metric: took 4.827726125s to extract preloaded images to volume ...
	W1019 12:51:12.461778  641657 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:51:12.461816  641657 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:51:12.462215  641657 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:51:12.577399  641657 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-123864 --name embed-certs-123864 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-123864 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-123864 --network embed-certs-123864 --ip 192.168.76.2 --volume embed-certs-123864:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
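
The docker run above publishes ports 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to 127.0.0.1. The mappings actually assigned (such as the 33470 used for SSH further down) can be read back with:

  docker port embed-certs-123864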
	I1019 12:51:12.980931  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Running}}
	I1019 12:51:13.004389  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:13.028801  641657 cli_runner.go:164] Run: docker exec embed-certs-123864 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:51:13.085013  641657 oci.go:144] the created container "embed-certs-123864" has a running status.
	I1019 12:51:13.085052  641657 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa...
	I1019 12:51:13.828537  641657 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:51:13.866408  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:13.892688  641657 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:51:13.892722  641657 kic_runner.go:114] Args: [docker exec --privileged embed-certs-123864 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:51:13.960543  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:13.985164  641657 machine.go:93] provisionDockerMachine start ...
	I1019 12:51:13.985274  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:14.011470  641657 main.go:141] libmachine: Using SSH client type: native
	I1019 12:51:14.012105  641657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1019 12:51:14.012174  641657 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:51:14.168494  641657 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:51:14.168850  641657 ubuntu.go:182] provisioning hostname "embed-certs-123864"
	I1019 12:51:14.168951  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:14.196389  641657 main.go:141] libmachine: Using SSH client type: native
	I1019 12:51:14.196707  641657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1019 12:51:14.196762  641657 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-123864 && echo "embed-certs-123864" | sudo tee /etc/hostname
	I1019 12:51:14.375558  641657 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:51:14.375677  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:14.399849  641657 main.go:141] libmachine: Using SSH client type: native
	I1019 12:51:14.400162  641657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1019 12:51:14.400197  641657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-123864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-123864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-123864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:51:14.554096  641657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:51:14.554135  641657 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:51:14.554192  641657 ubuntu.go:190] setting up certificates
	I1019 12:51:14.554208  641657 provision.go:84] configureAuth start
	I1019 12:51:14.554281  641657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:51:14.577651  641657 provision.go:143] copyHostCerts
	I1019 12:51:14.577734  641657 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:51:14.577752  641657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:51:14.577830  641657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:51:14.577952  641657 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:51:14.577964  641657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:51:14.578005  641657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:51:14.578085  641657 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:51:14.578096  641657 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:51:14.578132  641657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:51:14.578207  641657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.embed-certs-123864 san=[127.0.0.1 192.168.76.2 embed-certs-123864 localhost minikube]
	I1019 12:51:14.770196  641657 provision.go:177] copyRemoteCerts
	I1019 12:51:14.770266  641657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:51:14.770320  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:14.790659  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:14.897270  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 12:51:14.920308  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:51:14.941885  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:51:14.964587  641657 provision.go:87] duration metric: took 410.365652ms to configureAuth
	I1019 12:51:14.964637  641657 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:51:14.964831  641657 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:14.964954  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:14.985995  641657 main.go:141] libmachine: Using SSH client type: native
	I1019 12:51:14.986300  641657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33470 <nil> <nil>}
	I1019 12:51:14.986326  641657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:51:15.267766  641657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:51:15.267797  641657 machine.go:96] duration metric: took 1.282601819s to provisionDockerMachine
	I1019 12:51:15.267809  641657 client.go:171] duration metric: took 8.222445031s to LocalClient.Create
	I1019 12:51:15.267836  641657 start.go:167] duration metric: took 8.22251436s to libmachine.API.Create "embed-certs-123864"
	I1019 12:51:15.267846  641657 start.go:293] postStartSetup for "embed-certs-123864" (driver="docker")
	I1019 12:51:15.267860  641657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:51:15.267937  641657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:51:15.268057  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:15.288965  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:15.393233  641657 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:51:15.396964  641657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:51:15.397004  641657 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:51:15.397015  641657 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:51:15.397061  641657 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:51:15.397139  641657 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:51:15.397233  641657 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:51:15.404756  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:51:15.425871  641657 start.go:296] duration metric: took 158.006551ms for postStartSetup
	I1019 12:51:15.426235  641657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:51:15.443891  641657 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:51:15.444250  641657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:51:15.444309  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:15.462837  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:15.558992  641657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:51:15.564890  641657 start.go:128] duration metric: took 8.521631773s to createHost
	I1019 12:51:15.564912  641657 start.go:83] releasing machines lock for "embed-certs-123864", held for 8.521768777s
	I1019 12:51:15.564984  641657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:51:15.584294  641657 ssh_runner.go:195] Run: cat /version.json
	I1019 12:51:15.584354  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:15.584363  641657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:51:15.584435  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:15.604834  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:15.605320  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:15.753385  641657 ssh_runner.go:195] Run: systemctl --version
	I1019 12:51:15.760386  641657 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:51:15.799992  641657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:51:15.804884  641657 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:51:15.804964  641657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:51:15.831872  641657 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:51:15.831893  641657 start.go:495] detecting cgroup driver to use...
	I1019 12:51:15.831930  641657 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:51:15.831983  641657 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:51:15.848546  641657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:51:15.861947  641657 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:51:15.862010  641657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:51:15.884105  641657 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:51:15.904969  641657 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:51:15.999973  641657 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:51:16.125727  641657 docker.go:234] disabling docker service ...
	I1019 12:51:16.125803  641657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:51:16.146609  641657 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:51:16.161117  641657 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:51:16.250201  641657 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:51:16.340469  641657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:51:16.353051  641657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:51:16.367878  641657 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:51:16.367955  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.378478  641657 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:51:16.378533  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.387636  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.396364  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.405198  641657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:51:16.414469  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.423402  641657 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.438505  641657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:51:16.447712  641657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:51:16.457523  641657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:51:16.465749  641657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:51:16.550236  641657 ssh_runner.go:195] Run: sudo systemctl restart crio
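
The run above rewrites CRI-O's drop-in config one sed at a time. Collected into a single script for reference (a sketch only: every path and value is taken from the commands logged above; run as root on the node):

    # Point crictl at the CRI-O socket, then adjust /etc/crio/crio.conf.d/02-crio.conf.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sed -i '/conmon_cgroup = .*/d' "$conf"                        # drop any stale value first
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf" # then re-add it after cgroup_manager
    sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$conf"
    grep -q '^ *default_sysctls' "$conf" || \
      sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
    sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
    echo 1 > /proc/sys/net/ipv4/ip_forward                        # required for pod networking
    systemctl daemon-reload && systemctl restart crio
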
	I1019 12:51:16.920407  641657 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:51:16.920514  641657 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:51:16.924612  641657 start.go:563] Will wait 60s for crictl version
	I1019 12:51:16.924662  641657 ssh_runner.go:195] Run: which crictl
	I1019 12:51:16.928221  641657 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:51:16.953030  641657 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:51:16.953110  641657 ssh_runner.go:195] Run: crio --version
	I1019 12:51:16.982151  641657 ssh_runner.go:195] Run: crio --version
	I1019 12:51:17.013415  641657 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:51:17.014677  641657 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:51:17.032800  641657 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 12:51:17.037262  641657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
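
The one-liner above is minikube's atomic /etc/hosts update: filter out any stale record, append the fresh one to a temp file, then install it with a single copy (the same pattern reappears below for control-plane.minikube.internal). Unrolled for readability, a sketch using the values from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old record
      echo $'192.168.76.1\thost.minikube.internal'      # append the current gateway address
    } > /tmp/h.$$                                       # stage under the shell's PID
    sudo cp /tmp/h.$$ /etc/hosts                        # install in one step
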
	I1019 12:51:17.048289  641657 kubeadm.go:883] updating cluster {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:51:17.048433  641657 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:51:17.048499  641657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:51:17.082823  641657 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:51:17.082844  641657 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:51:17.082888  641657 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:51:17.110144  641657 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:51:17.110165  641657 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:51:17.110172  641657 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 12:51:17.110253  641657 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-123864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:51:17.110314  641657 ssh_runner.go:195] Run: crio config
	I1019 12:51:17.157370  641657 cni.go:84] Creating CNI manager for ""
	I1019 12:51:17.157394  641657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:51:17.157411  641657 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:51:17.157473  641657 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-123864 NodeName:embed-certs-123864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:51:17.157654  641657 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-123864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
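
Before the real init runs below, a config rendered like the one above can be vetted offline; a sketch, assuming kubeadm's standard --dry-run flag and the staging path the log shows minikube using:

    # Parse and validate the rendered config without changing node state.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
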
	I1019 12:51:17.157729  641657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:51:17.167553  641657 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:51:17.167619  641657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:51:17.176067  641657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 12:51:17.189213  641657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:51:17.206449  641657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 12:51:17.220308  641657 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:51:17.223998  641657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:51:17.234237  641657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:51:17.319215  641657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:51:17.344789  641657 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864 for IP: 192.168.76.2
	I1019 12:51:17.344814  641657 certs.go:195] generating shared ca certs ...
	I1019 12:51:17.344834  641657 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:17.344964  641657 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:51:17.345005  641657 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:51:17.345015  641657 certs.go:257] generating profile certs ...
	I1019 12:51:17.345065  641657 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key
	I1019 12:51:17.345085  641657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.crt with IP's: []
	I1019 12:51:17.479866  641657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.crt ...
	I1019 12:51:17.479894  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.crt: {Name:mk789d1c3981290257ed51013026bccb8ed981d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:17.480107  641657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key ...
	I1019 12:51:17.480123  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key: {Name:mk60f668a93b50836f4c432720fa329367fe25aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:17.480237  641657 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b
	I1019 12:51:17.480267  641657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt.ef142c6b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1019 12:51:17.981064  641657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt.ef142c6b ...
	I1019 12:51:17.981093  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt.ef142c6b: {Name:mkc3c6a646b848fb187facbc8f68d8b49d0b678e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:17.981268  641657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b ...
	I1019 12:51:17.981280  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b: {Name:mkb298b7b41d40a9e820dd9160926b864c9a9613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:17.981357  641657 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt.ef142c6b -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt
	I1019 12:51:17.981455  641657 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key
	I1019 12:51:17.981530  641657 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key
	I1019 12:51:17.981547  641657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt with IP's: []
	I1019 12:51:18.178562  641657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt ...
	I1019 12:51:18.178606  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt: {Name:mk159fc4824ba06b72a2886e2272d14bb3380ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:18.178813  641657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key ...
	I1019 12:51:18.178832  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key: {Name:mkf9c6d95cca93400c2b9f8ff06a8dc988efeb16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:18.179086  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:51:18.179137  641657 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:51:18.179152  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:51:18.179182  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:51:18.179213  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:51:18.179244  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:51:18.179299  641657 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:51:18.180216  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:51:18.201061  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:51:18.220925  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:51:18.239438  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:51:18.257722  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:51:18.276496  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:51:18.294047  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:51:18.312610  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:51:18.330675  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:51:18.350242  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:51:18.368099  641657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:51:18.385686  641657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:51:18.398754  641657 ssh_runner.go:195] Run: openssl version
	I1019 12:51:18.404684  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:51:18.414567  641657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:51:18.418608  641657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:51:18.418656  641657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:51:18.455130  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:51:18.464493  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:51:18.473058  641657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:51:18.476986  641657 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:51:18.477053  641657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:51:18.512760  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:51:18.522192  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:51:18.532031  641657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:51:18.536010  641657 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:51:18.536085  641657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:51:18.573175  641657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
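
The three cert installs above all follow one pattern: link the PEM into /etc/ssl/certs, then add a subject-hash symlink alongside it, which is how OpenSSL locates trust anchors. As a sketch for one of them (minikubeCA.pem, whose hash b5213941 appears in the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$CERT")       # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
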
	I1019 12:51:18.582248  641657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:51:18.586151  641657 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:51:18.586227  641657 kubeadm.go:400] StartCluster: {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:51:18.586302  641657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:51:18.586342  641657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:51:18.613988  641657 cri.go:89] found id: ""
	I1019 12:51:18.614049  641657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:51:18.622275  641657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:51:18.630516  641657 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:51:18.630568  641657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:51:18.638668  641657 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:51:18.638682  641657 kubeadm.go:157] found existing configuration files:
	
	I1019 12:51:18.638718  641657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:51:18.646389  641657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:51:18.646480  641657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:51:18.654012  641657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:51:18.662464  641657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:51:18.662522  641657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:51:18.670837  641657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:51:18.678613  641657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:51:18.678659  641657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:51:18.686186  641657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:51:18.694738  641657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:51:18.694796  641657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:51:18.702996  641657 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:51:18.746745  641657 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:51:18.746825  641657 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:51:18.770195  641657 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:51:18.770332  641657 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:51:18.770391  641657 kubeadm.go:318] OS: Linux
	I1019 12:51:18.770504  641657 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:51:18.770576  641657 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:51:18.770660  641657 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:51:18.770728  641657 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:51:18.770788  641657 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:51:18.770859  641657 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:51:18.770926  641657 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:51:18.770989  641657 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:51:18.828982  641657 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:51:18.829107  641657 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:51:18.829223  641657 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:51:18.837937  641657 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:51:15.781362  633056 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:51:15.785987  633056 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:51:15.786006  633056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:51:15.801120  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1019 12:51:16.045449  633056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:51:16.045609  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:16.045738  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-561408 minikube.k8s.io/updated_at=2025_10_19T12_51_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=no-preload-561408 minikube.k8s.io/primary=true
	I1019 12:51:16.141407  633056 ops.go:34] apiserver oom_adj: -16
	I1019 12:51:16.141417  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:16.641580  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:17.142249  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:17.641547  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:18.142206  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:18.642333  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:19.141850  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:19.641792  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:20.142308  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:20.642113  633056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:20.719831  633056 kubeadm.go:1113] duration metric: took 4.674268281s to wait for elevateKubeSystemPrivileges
	I1019 12:51:20.719870  633056 kubeadm.go:402] duration metric: took 15.811298169s to StartCluster
	I1019 12:51:20.719896  633056 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:20.719975  633056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:51:20.721780  633056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:20.722089  633056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:51:20.722089  633056 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:51:20.722192  633056 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:51:20.722285  633056 addons.go:69] Setting storage-provisioner=true in profile "no-preload-561408"
	I1019 12:51:20.722302  633056 addons.go:238] Setting addon storage-provisioner=true in "no-preload-561408"
	I1019 12:51:20.722304  633056 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:20.722337  633056 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:51:20.722355  633056 addons.go:69] Setting default-storageclass=true in profile "no-preload-561408"
	I1019 12:51:20.722369  633056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-561408"
	I1019 12:51:20.722768  633056 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:51:20.723129  633056 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:51:20.726549  633056 out.go:179] * Verifying Kubernetes components...
	I1019 12:51:20.728204  633056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:51:20.748032  633056 addons.go:238] Setting addon default-storageclass=true in "no-preload-561408"
	I1019 12:51:20.748090  633056 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:51:20.748562  633056 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:51:20.750709  633056 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:51:16.508269  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	W1019 12:51:19.007252  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	W1019 12:51:21.008373  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	I1019 12:51:20.752028  633056 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:20.752051  633056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:51:20.752109  633056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:51:20.782376  633056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:51:20.785031  633056 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:20.785052  633056 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:51:20.785124  633056 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:51:20.812508  633056 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:51:20.840687  633056 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
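
That pipeline splices a hosts{} block (mapping host.minikube.internal to the gateway) ahead of CoreDNS's forward directive, adds a log directive after errors, and replaces the ConfigMap in place. Unrolled for readability; the commands are verbatim from the log, and $KUBECTL is only a shorthand introduced here:

    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
    $KUBECTL -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | $KUBECTL replace -f -
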
	I1019 12:51:20.880634  633056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:51:20.901812  633056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:20.934008  633056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:21.064362  633056 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1019 12:51:21.066561  633056 node_ready.go:35] waiting up to 6m0s for node "no-preload-561408" to be "Ready" ...
	I1019 12:51:21.278218  633056 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:51:18.841414  641657 out.go:252]   - Generating certificates and keys ...
	I1019 12:51:18.841533  641657 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:51:18.841608  641657 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1019 12:51:19.319356  641657 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:51:19.363406  641657 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:51:19.578611  641657 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:51:19.820287  641657 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:51:19.887501  641657 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:51:19.887661  641657 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-123864 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 12:51:20.150627  641657 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:51:20.150826  641657 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-123864 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1019 12:51:20.453641  641657 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:51:20.881392  641657 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:51:21.503887  641657 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:51:21.503995  641657 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:51:21.585173  641657 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:51:21.913685  641657 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:51:22.031203  641657 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:51:22.296385  641657 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:51:23.537048  641657 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:51:23.537737  641657 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:51:23.542254  641657 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:51:21.279691  633056 addons.go:514] duration metric: took 557.475955ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:51:21.568184  633056 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-561408" context rescaled to 1 replicas
	W1019 12:51:23.071025  633056 node_ready.go:57] node "no-preload-561408" has "Ready":"False" status (will retry)
	W1019 12:51:23.008494  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	W1019 12:51:25.507543  629457 node_ready.go:57] node "old-k8s-version-577062" has "Ready":"False" status (will retry)
	I1019 12:51:26.010268  629457 node_ready.go:49] node "old-k8s-version-577062" is "Ready"
	I1019 12:51:26.010302  629457 node_ready.go:38] duration metric: took 14.006095198s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:51:26.010322  629457 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:51:26.010379  629457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:51:26.027630  629457 api_server.go:72] duration metric: took 14.937306581s to wait for apiserver process to appear ...
	I1019 12:51:26.027721  629457 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:51:26.027755  629457 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 12:51:26.033267  629457 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1019 12:51:26.034400  629457 api_server.go:141] control plane version: v1.28.0
	I1019 12:51:26.034436  629457 api_server.go:131] duration metric: took 6.69589ms to wait for apiserver health ...
	I1019 12:51:26.034447  629457 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:51:26.038205  629457 system_pods.go:59] 8 kube-system pods found
	I1019 12:51:26.038244  629457 system_pods.go:61] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:51:26.038251  629457 system_pods.go:61] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running
	I1019 12:51:26.038265  629457 system_pods.go:61] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running
	I1019 12:51:26.038271  629457 system_pods.go:61] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running
	I1019 12:51:26.038277  629457 system_pods.go:61] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running
	I1019 12:51:26.038281  629457 system_pods.go:61] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running
	I1019 12:51:26.038286  629457 system_pods.go:61] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running
	I1019 12:51:26.038293  629457 system_pods.go:61] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:51:26.038301  629457 system_pods.go:74] duration metric: took 3.846373ms to wait for pod list to return data ...
	I1019 12:51:26.038313  629457 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:51:26.040648  629457 default_sa.go:45] found service account: "default"
	I1019 12:51:26.040673  629457 default_sa.go:55] duration metric: took 2.352924ms for default service account to be created ...
	I1019 12:51:26.040684  629457 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:51:26.044326  629457 system_pods.go:86] 8 kube-system pods found
	I1019 12:51:26.044364  629457 system_pods.go:89] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:51:26.044370  629457 system_pods.go:89] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running
	I1019 12:51:26.044378  629457 system_pods.go:89] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running
	I1019 12:51:26.044383  629457 system_pods.go:89] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running
	I1019 12:51:26.044396  629457 system_pods.go:89] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running
	I1019 12:51:26.044401  629457 system_pods.go:89] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running
	I1019 12:51:26.044407  629457 system_pods.go:89] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running
	I1019 12:51:26.044414  629457 system_pods.go:89] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:51:26.044563  629457 retry.go:31] will retry after 311.938423ms: missing components: kube-dns
	I1019 12:51:23.545550  641657 out.go:252]   - Booting up control plane ...
	I1019 12:51:23.545687  641657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:51:23.545798  641657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:51:23.545891  641657 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:51:23.560129  641657 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:51:23.560318  641657 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:51:23.567176  641657 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:51:23.567498  641657 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:51:23.567588  641657 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:51:23.670591  641657 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:51:23.670789  641657 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:51:24.172142  641657 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.703914ms
	I1019 12:51:24.175010  641657 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:51:24.175141  641657 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1019 12:51:24.175265  641657 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:51:24.175382  641657 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 12:51:26.378015  641657 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.198860026s
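	
	Note: kubeadm's kubelet-check and control-plane-check poll exactly the endpoints logged above; equivalent manual probes from the node, as a sketch (-k again only illustrative):
	
	    curl -s  http://127.0.0.1:10248/healthz    # kubelet
	    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
	    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
	    curl -sk https://192.168.76.2:8443/livez   # kube-apiserver
	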
	I1019 12:51:26.364576  629457 system_pods.go:86] 8 kube-system pods found
	I1019 12:51:26.364613  629457 system_pods.go:89] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Running
	I1019 12:51:26.364694  629457 system_pods.go:89] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running
	I1019 12:51:26.364727  629457 system_pods.go:89] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running
	I1019 12:51:26.364735  629457 system_pods.go:89] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running
	I1019 12:51:26.364741  629457 system_pods.go:89] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running
	I1019 12:51:26.364746  629457 system_pods.go:89] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running
	I1019 12:51:26.364751  629457 system_pods.go:89] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running
	I1019 12:51:26.364755  629457 system_pods.go:89] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Running
	I1019 12:51:26.364768  629457 system_pods.go:126] duration metric: took 324.075686ms to wait for k8s-apps to be running ...
	I1019 12:51:26.364780  629457 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:51:26.364858  629457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:51:26.387055  629457 system_svc.go:56] duration metric: took 22.263531ms WaitForService to wait for kubelet
	I1019 12:51:26.387085  629457 kubeadm.go:586] duration metric: took 15.296776118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:51:26.387136  629457 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:51:26.390328  629457 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:51:26.390360  629457 node_conditions.go:123] node cpu capacity is 8
	I1019 12:51:26.390389  629457 node_conditions.go:105] duration metric: took 3.247422ms to run NodePressure ...
	I1019 12:51:26.390405  629457 start.go:241] waiting for startup goroutines ...
	I1019 12:51:26.390432  629457 start.go:246] waiting for cluster config update ...
	I1019 12:51:26.390452  629457 start.go:255] writing updated cluster config ...
	I1019 12:51:26.390787  629457 ssh_runner.go:195] Run: rm -f paused
	I1019 12:51:26.395090  629457 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:51:26.399547  629457 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.405841  629457 pod_ready.go:94] pod "coredns-5dd5756b68-44mqv" is "Ready"
	I1019 12:51:26.405866  629457 pod_ready.go:86] duration metric: took 6.289636ms for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.408963  629457 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.414028  629457 pod_ready.go:94] pod "etcd-old-k8s-version-577062" is "Ready"
	I1019 12:51:26.414053  629457 pod_ready.go:86] duration metric: took 5.061531ms for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.418012  629457 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.422648  629457 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-577062" is "Ready"
	I1019 12:51:26.422672  629457 pod_ready.go:86] duration metric: took 4.632976ms for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.425509  629457 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:26.800411  629457 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-577062" is "Ready"
	I1019 12:51:26.800462  629457 pod_ready.go:86] duration metric: took 374.928205ms for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:27.000853  629457 pod_ready.go:83] waiting for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:27.399801  629457 pod_ready.go:94] pod "kube-proxy-lhths" is "Ready"
	I1019 12:51:27.399829  629457 pod_ready.go:86] duration metric: took 398.950239ms for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:27.602956  629457 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:27.999576  629457 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-577062" is "Ready"
	I1019 12:51:27.999607  629457 pod_ready.go:86] duration metric: took 396.618401ms for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:27.999624  629457 pod_ready.go:40] duration metric: took 1.604472782s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
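	
	Note: the extra "Ready" wait above iterates over the label selectors listed in the log entry; a roughly equivalent kubectl sketch for one of those selectors (the others follow the same pattern):
	
	    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	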
	I1019 12:51:28.058935  629457 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 12:51:28.061220  629457 out.go:203] 
	W1019 12:51:28.063294  629457 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 12:51:28.064603  629457 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:51:28.070598  629457 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-577062" cluster and "default" namespace by default
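	
	Note on the skew warning above: the host kubectl (1.34.1) is six minor versions ahead of the 1.28.0 control plane, well outside the supported +/-1 skew, so minikube suggests its bundled, version-matched kubectl. A sketch using the profile from this run:
	
	    minikube -p old-k8s-version-577062 kubectl -- get pods -A
	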
	I1019 12:51:26.868142  641657 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.693126231s
	I1019 12:51:28.677227  641657 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.502102082s
	I1019 12:51:28.689441  641657 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:51:28.699918  641657 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:51:28.709796  641657 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:51:28.710110  641657 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-123864 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:51:28.719059  641657 kubeadm.go:318] [bootstrap-token] Using token: lusb1a.cbizyiq43r0ijtlp
	W1019 12:51:25.570250  633056 node_ready.go:57] node "no-preload-561408" has "Ready":"False" status (will retry)
	W1019 12:51:28.075585  633056 node_ready.go:57] node "no-preload-561408" has "Ready":"False" status (will retry)
	I1019 12:51:28.720434  641657 out.go:252]   - Configuring RBAC rules ...
	I1019 12:51:28.720580  641657 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:51:28.725214  641657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:51:28.730260  641657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:51:28.732743  641657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:51:28.736212  641657 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:51:28.738918  641657 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:51:29.085371  641657 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:51:29.503821  641657 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:51:30.083697  641657 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:51:30.084876  641657 kubeadm.go:318] 
	I1019 12:51:30.084988  641657 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:51:30.084999  641657 kubeadm.go:318] 
	I1019 12:51:30.085094  641657 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:51:30.085101  641657 kubeadm.go:318] 
	I1019 12:51:30.085150  641657 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:51:30.085238  641657 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:51:30.085310  641657 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:51:30.085320  641657 kubeadm.go:318] 
	I1019 12:51:30.085407  641657 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:51:30.085435  641657 kubeadm.go:318] 
	I1019 12:51:30.085502  641657 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:51:30.085511  641657 kubeadm.go:318] 
	I1019 12:51:30.085603  641657 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:51:30.085713  641657 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:51:30.085817  641657 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:51:30.085827  641657 kubeadm.go:318] 
	I1019 12:51:30.085957  641657 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:51:30.086085  641657 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:51:30.086093  641657 kubeadm.go:318] 
	I1019 12:51:30.086207  641657 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lusb1a.cbizyiq43r0ijtlp \
	I1019 12:51:30.086359  641657 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:51:30.086398  641657 kubeadm.go:318] 	--control-plane 
	I1019 12:51:30.086409  641657 kubeadm.go:318] 
	I1019 12:51:30.086540  641657 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:51:30.086549  641657 kubeadm.go:318] 
	I1019 12:51:30.086678  641657 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lusb1a.cbizyiq43r0ijtlp \
	I1019 12:51:30.086825  641657 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
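	
	Note: the --discovery-token-ca-cert-hash printed in the join command above can be recomputed on the control plane to validate a join invocation; the standard sketch from the kubeadm docs, assuming the default CA path:
	
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    kubeadm token list    # bootstrap tokens like lusb1a.cbizyiq43r0ijtlp expire after 24h by default
	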
	I1019 12:51:30.089637  641657 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:51:30.089806  641657 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:51:30.089825  641657 cni.go:84] Creating CNI manager for ""
	I1019 12:51:30.089835  641657 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:51:30.091801  641657 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:51:30.093411  641657 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:51:30.098233  641657 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:51:30.098249  641657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:51:30.112329  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
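	
	Note: minikube applies kindnet here because the docker driver + crio runtime combination needs an explicit CNI (see the cni.go lines above). A hedged way to confirm the rollout, assuming the manifest creates a DaemonSet named "kindnet" in kube-system:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system rollout status daemonset kindnet
	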
	I1019 12:51:30.353864  641657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:51:30.354028  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-123864 minikube.k8s.io/updated_at=2025_10_19T12_51_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=embed-certs-123864 minikube.k8s.io/primary=true
	I1019 12:51:30.354044  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:30.365849  641657 ops.go:34] apiserver oom_adj: -16
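	
	Note: the oom_adj read above is a sanity check that the apiserver is shielded from the OOM killer. The kubelet gives critical static pods oom_score_adj -997, which the legacy /proc oom_adj interface reports back as -16 (the value logged); a sketch reading the modern knob, expected value an assumption:
	
	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # expect -997 for a critical static pod
	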
	I1019 12:51:30.438216  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:30.938627  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:31.439013  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1019 12:51:30.570418  633056 node_ready.go:57] node "no-preload-561408" has "Ready":"False" status (will retry)
	W1019 12:51:33.069915  633056 node_ready.go:57] node "no-preload-561408" has "Ready":"False" status (will retry)
	I1019 12:51:31.939255  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:32.438393  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:32.938388  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:33.438561  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:33.938280  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:34.438564  641657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:51:34.519418  641657 kubeadm.go:1113] duration metric: took 4.165455779s to wait for elevateKubeSystemPrivileges
	I1019 12:51:34.519497  641657 kubeadm.go:402] duration metric: took 15.93327514s to StartCluster
	I1019 12:51:34.519525  641657 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:34.519598  641657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:51:34.522326  641657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:34.522611  641657 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:51:34.522643  641657 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:51:34.522752  641657 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-123864"
	I1019 12:51:34.522772  641657 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-123864"
	I1019 12:51:34.522791  641657 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:34.522808  641657 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:51:34.522814  641657 addons.go:69] Setting default-storageclass=true in profile "embed-certs-123864"
	I1019 12:51:34.522628  641657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:51:34.522886  641657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-123864"
	I1019 12:51:34.523474  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:34.523936  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:34.524721  641657 out.go:179] * Verifying Kubernetes components...
	I1019 12:51:34.526778  641657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:51:34.553280  641657 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:51:34.553906  641657 addons.go:238] Setting addon default-storageclass=true in "embed-certs-123864"
	I1019 12:51:34.553955  641657 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:51:34.554598  641657 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:51:34.554832  641657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:34.554851  641657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:51:34.554898  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:34.585841  641657 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:34.585865  641657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:51:34.585924  641657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:51:34.589729  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:34.618132  641657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:51:34.691345  641657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:51:34.730513  641657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:51:34.733217  641657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:51:34.765226  641657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:51:34.908184  641657 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
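	
	Note: the sed pipeline a few lines up rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway; a sketch to inspect the result, with the fragment the patch inserts shown as comments:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected to contain, per the patch above:
	    #        hosts {
	    #           192.168.76.1 host.minikube.internal
	    #           fallthrough
	    #        }
	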
	I1019 12:51:35.115085  641657 node_ready.go:35] waiting up to 6m0s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:51:35.124805  641657 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:51:35.126978  641657 addons.go:514] duration metric: took 604.326902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:51:35.414537  641657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-123864" context rescaled to 1 replicas
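	
	Note: the "rescaled to 1 replicas" line reflects minikube trimming CoreDNS from the kubeadm default of two replicas, which a single-node cluster does not need; the equivalent kubectl sketch:
	
	    kubectl -n kube-system scale deployment coredns --replicas=1
	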
	I1019 12:51:34.573701  633056 node_ready.go:49] node "no-preload-561408" is "Ready"
	I1019 12:51:34.573742  633056 node_ready.go:38] duration metric: took 13.507126751s for node "no-preload-561408" to be "Ready" ...
	I1019 12:51:34.573761  633056 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:51:34.573819  633056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:51:34.599463  633056 api_server.go:72] duration metric: took 13.877337942s to wait for apiserver process to appear ...
	I1019 12:51:34.599500  633056 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:51:34.599527  633056 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:51:34.608205  633056 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:51:34.610364  633056 api_server.go:141] control plane version: v1.34.1
	I1019 12:51:34.610518  633056 api_server.go:131] duration metric: took 11.005266ms to wait for apiserver health ...
	I1019 12:51:34.610613  633056 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:51:34.626576  633056 system_pods.go:59] 8 kube-system pods found
	I1019 12:51:34.626624  633056 system_pods.go:61] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:51:34.626637  633056 system_pods.go:61] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running
	I1019 12:51:34.626645  633056 system_pods.go:61] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running
	I1019 12:51:34.626651  633056 system_pods.go:61] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running
	I1019 12:51:34.626656  633056 system_pods.go:61] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running
	I1019 12:51:34.626662  633056 system_pods.go:61] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running
	I1019 12:51:34.626667  633056 system_pods.go:61] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running
	I1019 12:51:34.626674  633056 system_pods.go:61] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:51:34.626683  633056 system_pods.go:74] duration metric: took 16.03748ms to wait for pod list to return data ...
	I1019 12:51:34.626700  633056 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:51:34.633325  633056 default_sa.go:45] found service account: "default"
	I1019 12:51:34.633506  633056 default_sa.go:55] duration metric: took 6.793546ms for default service account to be created ...
	I1019 12:51:34.633558  633056 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:51:34.724879  633056 system_pods.go:86] 8 kube-system pods found
	I1019 12:51:34.725530  633056 system_pods.go:89] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:51:34.725541  633056 system_pods.go:89] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running
	I1019 12:51:34.725551  633056 system_pods.go:89] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running
	I1019 12:51:34.725655  633056 system_pods.go:89] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running
	I1019 12:51:34.725663  633056 system_pods.go:89] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running
	I1019 12:51:34.725669  633056 system_pods.go:89] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running
	I1019 12:51:34.725675  633056 system_pods.go:89] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running
	I1019 12:51:34.725684  633056 system_pods.go:89] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:51:34.725759  633056 retry.go:31] will retry after 253.641184ms: missing components: kube-dns
	I1019 12:51:34.984098  633056 system_pods.go:86] 8 kube-system pods found
	I1019 12:51:34.984139  633056 system_pods.go:89] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:51:34.984147  633056 system_pods.go:89] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running
	I1019 12:51:34.984156  633056 system_pods.go:89] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running
	I1019 12:51:34.984161  633056 system_pods.go:89] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running
	I1019 12:51:34.984167  633056 system_pods.go:89] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running
	I1019 12:51:34.984172  633056 system_pods.go:89] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running
	I1019 12:51:34.984180  633056 system_pods.go:89] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running
	I1019 12:51:34.984205  633056 system_pods.go:89] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:51:34.984229  633056 retry.go:31] will retry after 327.325172ms: missing components: kube-dns
	I1019 12:51:35.316187  633056 system_pods.go:86] 8 kube-system pods found
	I1019 12:51:35.316216  633056 system_pods.go:89] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Running
	I1019 12:51:35.316221  633056 system_pods.go:89] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running
	I1019 12:51:35.316224  633056 system_pods.go:89] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running
	I1019 12:51:35.316228  633056 system_pods.go:89] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running
	I1019 12:51:35.316231  633056 system_pods.go:89] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running
	I1019 12:51:35.316235  633056 system_pods.go:89] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running
	I1019 12:51:35.316238  633056 system_pods.go:89] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running
	I1019 12:51:35.316240  633056 system_pods.go:89] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Running
	I1019 12:51:35.316249  633056 system_pods.go:126] duration metric: took 682.673546ms to wait for k8s-apps to be running ...
	I1019 12:51:35.316274  633056 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:51:35.316331  633056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:51:35.330106  633056 system_svc.go:56] duration metric: took 13.836436ms WaitForService to wait for kubelet
	I1019 12:51:35.330139  633056 kubeadm.go:586] duration metric: took 14.608022242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:51:35.330155  633056 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:51:35.335500  633056 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:51:35.335532  633056 node_conditions.go:123] node cpu capacity is 8
	I1019 12:51:35.335547  633056 node_conditions.go:105] duration metric: took 5.386798ms to run NodePressure ...
	I1019 12:51:35.335561  633056 start.go:241] waiting for startup goroutines ...
	I1019 12:51:35.335568  633056 start.go:246] waiting for cluster config update ...
	I1019 12:51:35.335578  633056 start.go:255] writing updated cluster config ...
	I1019 12:51:35.335824  633056 ssh_runner.go:195] Run: rm -f paused
	I1019 12:51:35.340215  633056 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:51:35.344324  633056 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.348514  633056 pod_ready.go:94] pod "coredns-66bc5c9577-pgxlp" is "Ready"
	I1019 12:51:35.348536  633056 pod_ready.go:86] duration metric: took 4.19267ms for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.350283  633056 pod_ready.go:83] waiting for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.353725  633056 pod_ready.go:94] pod "etcd-no-preload-561408" is "Ready"
	I1019 12:51:35.353749  633056 pod_ready.go:86] duration metric: took 3.446755ms for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.355377  633056 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.358641  633056 pod_ready.go:94] pod "kube-apiserver-no-preload-561408" is "Ready"
	I1019 12:51:35.358660  633056 pod_ready.go:86] duration metric: took 3.266509ms for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.360301  633056 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.745011  633056 pod_ready.go:94] pod "kube-controller-manager-no-preload-561408" is "Ready"
	I1019 12:51:35.745036  633056 pod_ready.go:86] duration metric: took 384.716802ms for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:35.945378  633056 pod_ready.go:83] waiting for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:36.345720  633056 pod_ready.go:94] pod "kube-proxy-lppwp" is "Ready"
	I1019 12:51:36.345751  633056 pod_ready.go:86] duration metric: took 400.348827ms for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:36.545184  633056 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:36.945783  633056 pod_ready.go:94] pod "kube-scheduler-no-preload-561408" is "Ready"
	I1019 12:51:36.945814  633056 pod_ready.go:86] duration metric: took 400.599473ms for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:51:36.945843  633056 pod_ready.go:40] duration metric: took 1.605592497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:51:37.001002  633056 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:51:37.003733  633056 out.go:179] * Done! kubectl is now configured to use "no-preload-561408" cluster and "default" namespace by default
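	
	Note: "Done!" means the kubeconfig context has already been switched to the new profile; a sketch of the usual follow-up checks:
	
	    kubectl config current-context    # expect no-preload-561408
	    kubectl get nodes -o wide
	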
	
	
	==> CRI-O <==
	Oct 19 12:51:25 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:25.973717357Z" level=info msg="Starting container: 62e4498c98ebc734bd2e3f01e63e7843ebd06883728796baa196d0c346e664c7" id=1cd9c1ec-714b-4bb8-b427-6e71586efb65 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:51:25 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:25.976516416Z" level=info msg="Started container" PID=2123 containerID=62e4498c98ebc734bd2e3f01e63e7843ebd06883728796baa196d0c346e664c7 description=kube-system/coredns-5dd5756b68-44mqv/coredns id=1cd9c1ec-714b-4bb8-b427-6e71586efb65 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9cd1817994fdafdddeb06e1782936d9b5c5c06a6fb3b75e6b741cc08796cf44
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.566240214Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8b8785da-c780-464b-80dc-36069c068b3d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.566344067Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.572725737Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:12c7a43cd2f39f9df1dc6248a095687e30e8d21ea9afb4057d17c15e106912d5 UID:e374ff62-1a16-4b52-84da-3d26c90172cf NetNS:/var/run/netns/4774cb83-8ad5-4574-9ae2-373afe68a61e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2dd8}] Aliases:map[]}"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.572754174Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.583708429Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:12c7a43cd2f39f9df1dc6248a095687e30e8d21ea9afb4057d17c15e106912d5 UID:e374ff62-1a16-4b52-84da-3d26c90172cf NetNS:/var/run/netns/4774cb83-8ad5-4574-9ae2-373afe68a61e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004a2dd8}] Aliases:map[]}"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.583851056Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.584618969Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.585508482Z" level=info msg="Ran pod sandbox 12c7a43cd2f39f9df1dc6248a095687e30e8d21ea9afb4057d17c15e106912d5 with infra container: default/busybox/POD" id=8b8785da-c780-464b-80dc-36069c068b3d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.586845591Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=11dbd767-0482-4a7c-a28b-a2f3821912df name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.586988318Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=11dbd767-0482-4a7c-a28b-a2f3821912df name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.587037364Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=11dbd767-0482-4a7c-a28b-a2f3821912df name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.587567559Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a4ac740-acaf-40e1-8dd1-dddc9b80e9b0 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:51:28 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:28.591393508Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.333967024Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0a4ac740-acaf-40e1-8dd1-dddc9b80e9b0 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.334776441Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0cc6105b-2885-42d1-b72c-f7dd916ca0f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.336797975Z" level=info msg="Creating container: default/busybox/busybox" id=882b1631-3d4d-48b6-9443-c54b3efec41e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.338169558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.34296031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.343547117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.389724802Z" level=info msg="Created container e7c58156f5b141caec3ed4c49f0deb138121ec86a929e1121a21288bef1319f5: default/busybox/busybox" id=882b1631-3d4d-48b6-9443-c54b3efec41e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.391578064Z" level=info msg="Starting container: e7c58156f5b141caec3ed4c49f0deb138121ec86a929e1121a21288bef1319f5" id=4988c201-dc1a-41e8-affc-65c13924870c name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:51:29 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:29.39384297Z" level=info msg="Started container" PID=2198 containerID=e7c58156f5b141caec3ed4c49f0deb138121ec86a929e1121a21288bef1319f5 description=default/busybox/busybox id=4988c201-dc1a-41e8-affc-65c13924870c name=/runtime.v1.RuntimeService/StartContainer sandboxID=12c7a43cd2f39f9df1dc6248a095687e30e8d21ea9afb4057d17c15e106912d5
	Oct 19 12:51:37 old-k8s-version-577062 crio[772]: time="2025-10-19T12:51:37.365745477Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e7c58156f5b14       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   12c7a43cd2f39       busybox                                          default
	62e4498c98ebc       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      12 seconds ago      Running             coredns                   0                   a9cd1817994fd       coredns-5dd5756b68-44mqv                         kube-system
	d0ae3f6cc1f00       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   38c73eaa025a4       storage-provisioner                              kube-system
	1b18d5cc3e5cf       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   cd7f1c69a669d       kindnet-2h26b                                    kube-system
	3c441058f0a0f       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   b6b070c8d2258       kube-proxy-lhths                                 kube-system
	f9e3535c44fe2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   7a5401a71e509       kube-apiserver-old-k8s-version-577062            kube-system
	2a8c1af9c9f49       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   c9357ae8e7d46       etcd-old-k8s-version-577062                      kube-system
	b5738e722f2e9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   ce952f2540c0b       kube-controller-manager-old-k8s-version-577062   kube-system
	a0ec3b493e284       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   be705564dcbd3       kube-scheduler-old-k8s-version-577062            kube-system
	
	
	==> coredns [62e4498c98ebc734bd2e3f01e63e7843ebd06883728796baa196d0c346e664c7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48096 - 36680 "HINFO IN 5564359061467796205.3223520044331946881. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.467832015s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-577062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-577062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-577062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_50_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:50:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-577062
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:51:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:51:28 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:51:28 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:51:28 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:51:28 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-577062
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bfa1b0a1-e61a-4552-82c8-d6cc29922f2a
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-44mqv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-577062                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-2h26b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-577062             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-577062    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-lhths                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-577062             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-577062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-577062 event: Registered Node old-k8s-version-577062 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-577062 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [2a8c1af9c9f492e73a975315ba7df14c9f767dc5ca9d88e012f07437928ebda1] <==
	{"level":"info","ts":"2025-10-19T12:51:11.593613Z","caller":"traceutil/trace.go:171","msg":"trace[1447068068] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:332; }","duration":"137.140966ms","start":"2025-10-19T12:51:11.456463Z","end":"2025-10-19T12:51:11.593604Z","steps":["trace[1447068068] 'agreement among raft nodes before linearized reading'  (duration: 137.037403ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.593747Z","caller":"traceutil/trace.go:171","msg":"trace[453930989] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"231.070157ms","start":"2025-10-19T12:51:11.362669Z","end":"2025-10-19T12:51:11.593739Z","steps":["trace[453930989] 'process raft request'  (duration: 230.789477ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.593751Z","caller":"traceutil/trace.go:171","msg":"trace[1139278341] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"231.494165ms","start":"2025-10-19T12:51:11.36225Z","end":"2025-10-19T12:51:11.593744Z","steps":["trace[1139278341] 'process raft request'  (duration: 230.167364ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.72176Z","caller":"traceutil/trace.go:171","msg":"trace[750751747] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"117.247058ms","start":"2025-10-19T12:51:11.604493Z","end":"2025-10-19T12:51:11.72174Z","steps":["trace[750751747] 'process raft request'  (duration: 114.561652ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.721898Z","caller":"traceutil/trace.go:171","msg":"trace[417129554] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"116.342549ms","start":"2025-10-19T12:51:11.605543Z","end":"2025-10-19T12:51:11.721886Z","steps":["trace[417129554] 'process raft request'  (duration: 115.754863ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:11.976485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.840733ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789436947534887 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:312 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:3970 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-19T12:51:11.976715Z","caller":"traceutil/trace.go:171","msg":"trace[152284839] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"236.930958ms","start":"2025-10-19T12:51:11.739762Z","end":"2025-10-19T12:51:11.976693Z","steps":["trace[152284839] 'process raft request'  (duration: 86.801163ms)","trace[152284839] 'compare'  (duration: 149.736619ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:51:11.977019Z","caller":"traceutil/trace.go:171","msg":"trace[2010477276] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"190.856712ms","start":"2025-10-19T12:51:11.786149Z","end":"2025-10-19T12:51:11.977006Z","steps":["trace[2010477276] 'process raft request'  (duration: 190.518394ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.977061Z","caller":"traceutil/trace.go:171","msg":"trace[1343830888] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"234.04954ms","start":"2025-10-19T12:51:11.743001Z","end":"2025-10-19T12:51:11.977051Z","steps":["trace[1343830888] 'process raft request'  (duration: 233.605187ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:11.977152Z","caller":"traceutil/trace.go:171","msg":"trace[1063054929] linearizableReadLoop","detail":"{readStateIndex:351; appliedIndex:348; }","duration":"182.000091ms","start":"2025-10-19T12:51:11.795145Z","end":"2025-10-19T12:51:11.977145Z","steps":["trace[1063054929] 'read index received'  (duration: 31.428839ms)","trace[1063054929] 'applied index is now lower than readState.Index'  (duration: 150.570482ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:51:11.977204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.081858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T12:51:11.977222Z","caller":"traceutil/trace.go:171","msg":"trace[556273111] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:339; }","duration":"182.107399ms","start":"2025-10-19T12:51:11.795109Z","end":"2025-10-19T12:51:11.977216Z","steps":["trace[556273111] 'agreement among raft nodes before linearized reading'  (duration: 182.060804ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:12.123544Z","caller":"traceutil/trace.go:171","msg":"trace[1918636204] linearizableReadLoop","detail":"{readStateIndex:354; appliedIndex:353; }","duration":"111.972006ms","start":"2025-10-19T12:51:12.01155Z","end":"2025-10-19T12:51:12.123522Z","steps":["trace[1918636204] 'read index received'  (duration: 66.426595ms)","trace[1918636204] 'applied index is now lower than readState.Index'  (duration: 45.544279ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:51:12.123737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.195625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-577062\" ","response":"range_response_count:1 size:5711"}
	{"level":"info","ts":"2025-10-19T12:51:12.124354Z","caller":"traceutil/trace.go:171","msg":"trace[927597957] range","detail":"{range_begin:/registry/minions/old-k8s-version-577062; range_end:; response_count:1; response_revision:341; }","duration":"112.821818ms","start":"2025-10-19T12:51:12.011517Z","end":"2025-10-19T12:51:12.124339Z","steps":["trace[927597957] 'agreement among raft nodes before linearized reading'  (duration: 112.150638ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:12.124713Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.049924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2025-10-19T12:51:12.124763Z","caller":"traceutil/trace.go:171","msg":"trace[1602827645] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; response_count:1; response_revision:341; }","duration":"113.109122ms","start":"2025-10-19T12:51:12.011644Z","end":"2025-10-19T12:51:12.124753Z","steps":["trace[1602827645] 'agreement among raft nodes before linearized reading'  (duration: 112.998892ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:12.123812Z","caller":"traceutil/trace.go:171","msg":"trace[322863832] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"120.723636ms","start":"2025-10-19T12:51:12.00307Z","end":"2025-10-19T12:51:12.123793Z","steps":["trace[322863832] 'process raft request'  (duration: 74.961143ms)","trace[322863832] 'compare'  (duration: 45.2915ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T12:51:12.125101Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.319013ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2025-10-19T12:51:12.125132Z","caller":"traceutil/trace.go:171","msg":"trace[1899165488] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:341; }","duration":"113.36032ms","start":"2025-10-19T12:51:12.011763Z","end":"2025-10-19T12:51:12.125123Z","steps":["trace[1899165488] 'agreement among raft nodes before linearized reading'  (duration: 113.287727ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:12.261536Z","caller":"traceutil/trace.go:171","msg":"trace[548133208] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"108.686492ms","start":"2025-10-19T12:51:12.152821Z","end":"2025-10-19T12:51:12.261507Z","steps":["trace[548133208] 'process raft request'  (duration: 87.741417ms)","trace[548133208] 'compare'  (duration: 20.713927ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:51:12.429691Z","caller":"traceutil/trace.go:171","msg":"trace[1096298429] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"159.654338ms","start":"2025-10-19T12:51:12.270011Z","end":"2025-10-19T12:51:12.429665Z","steps":["trace[1096298429] 'process raft request'  (duration: 145.311956ms)","trace[1096298429] 'compare'  (duration: 14.219349ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:51:12.432413Z","caller":"traceutil/trace.go:171","msg":"trace[509609660] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"158.77029ms","start":"2025-10-19T12:51:12.273625Z","end":"2025-10-19T12:51:12.432395Z","steps":["trace[509609660] 'process raft request'  (duration: 158.732822ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:12.432673Z","caller":"traceutil/trace.go:171","msg":"trace[347175854] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"161.054288ms","start":"2025-10-19T12:51:12.271591Z","end":"2025-10-19T12:51:12.432645Z","steps":["trace[347175854] 'process raft request'  (duration: 160.632318ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:12.433167Z","caller":"traceutil/trace.go:171","msg":"trace[1277727341] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"161.07027ms","start":"2025-10-19T12:51:12.272077Z","end":"2025-10-19T12:51:12.433147Z","steps":["trace[1277727341] 'process raft request'  (duration: 160.235559ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:51:38 up  2:34,  0 user,  load average: 8.49, 5.30, 3.08
	Linux old-k8s-version-577062 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b18d5cc3e5cf9168400b1eff6ea3043cb88af4b7185034fd2a421c00ae4201c] <==
	I1019 12:51:15.090563       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:51:15.090794       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 12:51:15.090923       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:51:15.090937       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:51:15.090958       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:51:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:51:15.385924       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:51:15.385987       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:51:15.386006       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:51:15.386450       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:51:15.686171       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:51:15.686200       1 metrics.go:72] Registering metrics
	I1019 12:51:15.686281       1 controller.go:711] "Syncing nftables rules"
	I1019 12:51:25.393975       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:51:25.394033       1 main.go:301] handling current node
	I1019 12:51:35.389555       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:51:35.389612       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f9e3535c44fe200383e323645c4395896f90b8c6144b5d7b2ae74a4a9eede0ab] <==
	I1019 12:50:54.967910       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 12:50:54.967920       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 12:50:54.968082       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 12:50:54.968101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 12:50:54.968189       1 aggregator.go:166] initial CRD sync complete...
	I1019 12:50:54.968218       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 12:50:54.968240       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:50:54.968264       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:50:54.969766       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 12:50:55.009708       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:50:55.874456       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:50:55.878774       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:50:55.878795       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:50:56.345489       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:50:56.385107       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:50:56.479524       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:50:56.485341       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1019 12:50:56.486333       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 12:50:56.491684       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:50:56.900154       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 12:50:58.271054       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 12:50:58.284009       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:50:58.296848       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1019 12:51:10.490223       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1019 12:51:10.646048       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b5738e722f2e933d7308bdebcb5c0a0a36f0ea651e92826eb4ffd97ce7a9899b] <==
	I1019 12:51:09.845744       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1019 12:51:09.905769       1 shared_informer.go:318] Caches are synced for persistent volume
	I1019 12:51:09.944052       1 shared_informer.go:318] Caches are synced for attach detach
	I1019 12:51:10.281495       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:51:10.337869       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:51:10.337903       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 12:51:10.634562       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1019 12:51:11.176074       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-44mqv"
	I1019 12:51:11.184559       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2h26b"
	I1019 12:51:11.185296       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lhths"
	I1019 12:51:11.347286       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-lkcln"
	I1019 12:51:11.600417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="966.221766ms"
	I1019 12:51:11.728637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.125619ms"
	I1019 12:51:11.728734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.128µs"
	I1019 12:51:12.265487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1019 12:51:12.453759       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-lkcln"
	I1019 12:51:12.480181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="216.075551ms"
	I1019 12:51:12.534020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.628606ms"
	I1019 12:51:12.534156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.782µs"
	I1019 12:51:25.614616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.916µs"
	I1019 12:51:25.631539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.934µs"
	I1019 12:51:26.326027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.928µs"
	I1019 12:51:26.362337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.799528ms"
	I1019 12:51:26.363000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.039µs"
	I1019 12:51:29.757617       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [3c441058f0a0fbc80744b88d4dcdbb002abb180ac3b0cbee67432a38a1717ba7] <==
	I1019 12:51:12.601768       1 server_others.go:69] "Using iptables proxy"
	I1019 12:51:12.622621       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1019 12:51:12.660614       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:51:12.664799       1 server_others.go:152] "Using iptables Proxier"
	I1019 12:51:12.664849       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 12:51:12.664859       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 12:51:12.664899       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 12:51:12.668037       1 server.go:846] "Version info" version="v1.28.0"
	I1019 12:51:12.668239       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:51:12.669277       1 config.go:188] "Starting service config controller"
	I1019 12:51:12.671555       1 config.go:315] "Starting node config controller"
	I1019 12:51:12.674094       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 12:51:12.671658       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 12:51:12.669932       1 config.go:97] "Starting endpoint slice config controller"
	I1019 12:51:12.674407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 12:51:12.674640       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1019 12:51:12.774551       1 shared_informer.go:318] Caches are synced for service config
	I1019 12:51:12.774747       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a0ec3b493e284d565478be6d617fe4f03989c039bbb829c83e860a91f414b0de] <==
	E1019 12:50:54.938009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 12:50:54.938023       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1019 12:50:54.938058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 12:50:54.938070       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 12:50:54.938159       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1019 12:50:54.938194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1019 12:50:54.938533       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 12:50:54.938557       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1019 12:50:54.938682       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 12:50:54.938707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 12:50:56.060334       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1019 12:50:56.060380       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1019 12:50:56.070952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1019 12:50:56.071001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1019 12:50:56.071800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1019 12:50:56.071831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1019 12:50:56.076407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1019 12:50:56.076471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1019 12:50:56.120054       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1019 12:50:56.120130       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 12:50:56.170596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1019 12:50:56.170816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1019 12:50:56.177692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1019 12:50:56.177733       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1019 12:50:59.234845       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: W1019 12:51:11.348481    1392 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-577062" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-577062' and this object
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: E1019 12:51:11.348571    1392 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-577062" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-577062' and this object
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.350181    1392 topology_manager.go:215] "Topology Admit Handler" podUID="357fe2d6-42b8-4f53-aa84-9fde0f804ee8" podNamespace="kube-system" podName="kindnet-2h26b"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393440    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn555\" (UniqueName: \"kubernetes.io/projected/357fe2d6-42b8-4f53-aa84-9fde0f804ee8-kube-api-access-jn555\") pod \"kindnet-2h26b\" (UID: \"357fe2d6-42b8-4f53-aa84-9fde0f804ee8\") " pod="kube-system/kindnet-2h26b"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393541    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dba9194-393b-4f18-a6e5-057bd803c642-lib-modules\") pod \"kube-proxy-lhths\" (UID: \"3dba9194-393b-4f18-a6e5-057bd803c642\") " pod="kube-system/kube-proxy-lhths"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393576    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/357fe2d6-42b8-4f53-aa84-9fde0f804ee8-xtables-lock\") pod \"kindnet-2h26b\" (UID: \"357fe2d6-42b8-4f53-aa84-9fde0f804ee8\") " pod="kube-system/kindnet-2h26b"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393615    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dba9194-393b-4f18-a6e5-057bd803c642-xtables-lock\") pod \"kube-proxy-lhths\" (UID: \"3dba9194-393b-4f18-a6e5-057bd803c642\") " pod="kube-system/kube-proxy-lhths"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393655    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3dba9194-393b-4f18-a6e5-057bd803c642-kube-proxy\") pod \"kube-proxy-lhths\" (UID: \"3dba9194-393b-4f18-a6e5-057bd803c642\") " pod="kube-system/kube-proxy-lhths"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.393713    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9257d\" (UniqueName: \"kubernetes.io/projected/3dba9194-393b-4f18-a6e5-057bd803c642-kube-api-access-9257d\") pod \"kube-proxy-lhths\" (UID: \"3dba9194-393b-4f18-a6e5-057bd803c642\") " pod="kube-system/kube-proxy-lhths"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.394579    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/357fe2d6-42b8-4f53-aa84-9fde0f804ee8-cni-cfg\") pod \"kindnet-2h26b\" (UID: \"357fe2d6-42b8-4f53-aa84-9fde0f804ee8\") " pod="kube-system/kindnet-2h26b"
	Oct 19 12:51:11 old-k8s-version-577062 kubelet[1392]: I1019 12:51:11.394681    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/357fe2d6-42b8-4f53-aa84-9fde0f804ee8-lib-modules\") pod \"kindnet-2h26b\" (UID: \"357fe2d6-42b8-4f53-aa84-9fde0f804ee8\") " pod="kube-system/kindnet-2h26b"
	Oct 19 12:51:13 old-k8s-version-577062 kubelet[1392]: I1019 12:51:13.300510    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lhths" podStartSLOduration=2.300401403 podCreationTimestamp="2025-10-19 12:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:13.30007408 +0000 UTC m=+15.188556392" watchObservedRunningTime="2025-10-19 12:51:13.300401403 +0000 UTC m=+15.188883724"
	Oct 19 12:51:15 old-k8s-version-577062 kubelet[1392]: I1019 12:51:15.297763    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2h26b" podStartSLOduration=1.8523828629999999 podCreationTimestamp="2025-10-19 12:51:11 +0000 UTC" firstStartedPulling="2025-10-19 12:51:12.444664555 +0000 UTC m=+14.333146858" lastFinishedPulling="2025-10-19 12:51:14.889989355 +0000 UTC m=+16.778471666" observedRunningTime="2025-10-19 12:51:15.297519338 +0000 UTC m=+17.186001668" watchObservedRunningTime="2025-10-19 12:51:15.297707671 +0000 UTC m=+17.186189982"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.590876    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.613609    1392 topology_manager.go:215] "Topology Admit Handler" podUID="f97edd8d-a3ad-4339-a4c6-99bc764b5534" podNamespace="kube-system" podName="storage-provisioner"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.614602    1392 topology_manager.go:215] "Topology Admit Handler" podUID="360fd17f-a1ea-4400-85fa-dd78ab44fcbc" podNamespace="kube-system" podName="coredns-5dd5756b68-44mqv"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.699843    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f97edd8d-a3ad-4339-a4c6-99bc764b5534-tmp\") pod \"storage-provisioner\" (UID: \"f97edd8d-a3ad-4339-a4c6-99bc764b5534\") " pod="kube-system/storage-provisioner"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.699902    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwktv\" (UniqueName: \"kubernetes.io/projected/f97edd8d-a3ad-4339-a4c6-99bc764b5534-kube-api-access-nwktv\") pod \"storage-provisioner\" (UID: \"f97edd8d-a3ad-4339-a4c6-99bc764b5534\") " pod="kube-system/storage-provisioner"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.700099    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/360fd17f-a1ea-4400-85fa-dd78ab44fcbc-config-volume\") pod \"coredns-5dd5756b68-44mqv\" (UID: \"360fd17f-a1ea-4400-85fa-dd78ab44fcbc\") " pod="kube-system/coredns-5dd5756b68-44mqv"
	Oct 19 12:51:25 old-k8s-version-577062 kubelet[1392]: I1019 12:51:25.700155    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4r9j\" (UniqueName: \"kubernetes.io/projected/360fd17f-a1ea-4400-85fa-dd78ab44fcbc-kube-api-access-t4r9j\") pod \"coredns-5dd5756b68-44mqv\" (UID: \"360fd17f-a1ea-4400-85fa-dd78ab44fcbc\") " pod="kube-system/coredns-5dd5756b68-44mqv"
	Oct 19 12:51:26 old-k8s-version-577062 kubelet[1392]: I1019 12:51:26.336020    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-44mqv" podStartSLOduration=15.335964104 podCreationTimestamp="2025-10-19 12:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:26.326058018 +0000 UTC m=+28.214540329" watchObservedRunningTime="2025-10-19 12:51:26.335964104 +0000 UTC m=+28.224446454"
	Oct 19 12:51:26 old-k8s-version-577062 kubelet[1392]: I1019 12:51:26.347969    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.347910512 podCreationTimestamp="2025-10-19 12:51:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:26.336284527 +0000 UTC m=+28.224766837" watchObservedRunningTime="2025-10-19 12:51:26.347910512 +0000 UTC m=+28.236392822"
	Oct 19 12:51:28 old-k8s-version-577062 kubelet[1392]: I1019 12:51:28.264344    1392 topology_manager.go:215] "Topology Admit Handler" podUID="e374ff62-1a16-4b52-84da-3d26c90172cf" podNamespace="default" podName="busybox"
	Oct 19 12:51:28 old-k8s-version-577062 kubelet[1392]: I1019 12:51:28.315168    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpbxb\" (UniqueName: \"kubernetes.io/projected/e374ff62-1a16-4b52-84da-3d26c90172cf-kube-api-access-mpbxb\") pod \"busybox\" (UID: \"e374ff62-1a16-4b52-84da-3d26c90172cf\") " pod="default/busybox"
	Oct 19 12:51:30 old-k8s-version-577062 kubelet[1392]: I1019 12:51:30.342619    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.595543159 podCreationTimestamp="2025-10-19 12:51:28 +0000 UTC" firstStartedPulling="2025-10-19 12:51:28.587233702 +0000 UTC m=+30.475716047" lastFinishedPulling="2025-10-19 12:51:29.334234265 +0000 UTC m=+31.222716569" observedRunningTime="2025-10-19 12:51:30.342191632 +0000 UTC m=+32.230673943" watchObservedRunningTime="2025-10-19 12:51:30.342543681 +0000 UTC m=+32.231025991"
	
	
	==> storage-provisioner [d0ae3f6cc1f00576f5f9990cfc5f32a37e919680ec81c55a327365559a12af5e] <==
	I1019 12:51:25.983239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:51:25.998324       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:51:25.998455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 12:51:26.010670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:51:26.010793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8829309e-ce84-4b37-8b7e-53ec540533f6", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-577062_77346b78-2fcd-48db-a130-1d3857477154 became leader
	I1019 12:51:26.011635       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_77346b78-2fcd-48db-a130-1d3857477154!
	I1019 12:51:26.112310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_77346b78-2fcd-48db-a130-1d3857477154!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-577062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)
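Note: the EnableAddonWhileActive failures in this run share one signature, visible in the no-preload stderr below: "addons enable" aborts with MK_ADDON_ENABLE_PAUSED because its pause check ("check paused: list paused: runc: sudo runc list -f json") exits non-zero, runc having no state directory at /run/runc inside the crio-based node container. A minimal sketch for confirming this by hand, assuming the affected profile container is still running; the profile name is taken from the logs below, and the /etc/crio config path is an assumption:

	# Reproduce the exact pause check that the addon enable runs:
	minikube -p no-preload-561408 ssh "sudo runc list -f json"
	# Confirm the missing runc state directory reported in the stderr:
	minikube -p no-preload-561408 ssh "ls -la /run/runc"
	# Assumption: crio on this kicbase image may be configured with a
	# non-runc OCI runtime, which would leave /run/runc absent:
	minikube -p no-preload-561408 ssh "sudo grep -r default_runtime /etc/crio/"

The same failed pause check would likely also explain the serial/Pause failures reported for the other profiles in this run.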

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (453.86484ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:51:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-561408 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-561408 describe deploy/metrics-server -n kube-system: exit status 1 (58.810465ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-561408 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-561408
helpers_test.go:243: (dbg) docker inspect no-preload-561408:

-- stdout --
	[
	    {
	        "Id": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	        "Created": "2025-10-19T12:50:45.391801747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:50:45.428286158Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hostname",
	        "HostsPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hosts",
	        "LogPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583-json.log",
	        "Name": "/no-preload-561408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-561408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-561408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	                "LowerDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-561408",
	                "Source": "/var/lib/docker/volumes/no-preload-561408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-561408",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-561408",
	                "name.minikube.sigs.k8s.io": "no-preload-561408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d102cf647b420711f31bf265012b1d6690934f28e3780b0f4855af9558dbc72a",
	            "SandboxKey": "/var/run/docker/netns/d102cf647b42",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-561408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:fe:93:91:1b:90",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f4a13c0b85cf53d05b4d14cdbcd2a320c735f036b2f0ba0e125d18fecb5483e",
	                    "EndpointID": "4cd3d5279907c6fd3946699475fc8878e92bb2c665c754b89b6d9c5d98c11c9b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-561408",
	                        "a52c329ec080"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-561408 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-561408 logs -n 25: (1.071023388s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo docker system info                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cri-dockerd --version                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo containerd config dump                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                        │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                         │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                          │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:51:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:51:40.622501  651601 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:51:40.622765  651601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:51:40.622776  651601 out.go:374] Setting ErrFile to fd 2...
	I1019 12:51:40.622780  651601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:51:40.623003  651601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:51:40.623499  651601 out.go:368] Setting JSON to false
	I1019 12:51:40.624825  651601 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9249,"bootTime":1760869052,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:51:40.624920  651601 start.go:141] virtualization: kvm guest
	I1019 12:51:40.627144  651601 out.go:179] * [default-k8s-diff-port-999693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:51:40.628699  651601 notify.go:220] Checking for updates...
	I1019 12:51:40.628706  651601 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:51:40.630091  651601 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:51:40.631354  651601 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:51:40.632588  651601 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:51:40.633845  651601 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:51:40.635118  651601 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:51:40.636903  651601 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:40.637029  651601 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:51:40.637128  651601 config.go:182] Loaded profile config "old-k8s-version-577062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 12:51:40.637224  651601 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:51:40.661378  651601 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:51:40.661501  651601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:51:40.717835  651601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 12:51:40.707102933 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:51:40.717985  651601 docker.go:318] overlay module found
	I1019 12:51:40.719580  651601 out.go:179] * Using the docker driver based on user configuration
	I1019 12:51:40.720776  651601 start.go:305] selected driver: docker
	I1019 12:51:40.720791  651601 start.go:925] validating driver "docker" against <nil>
	I1019 12:51:40.720802  651601 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:51:40.721352  651601 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:51:40.777137  651601 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 12:51:40.767105693 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:51:40.777350  651601 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:51:40.777590  651601 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:51:40.779205  651601 out.go:179] * Using Docker driver with root privileges
	I1019 12:51:40.780416  651601 cni.go:84] Creating CNI manager for ""
	I1019 12:51:40.780492  651601 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:51:40.780506  651601 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:51:40.780557  651601 start.go:349] cluster config:
	{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:51:40.781769  651601 out.go:179] * Starting "default-k8s-diff-port-999693" primary control-plane node in "default-k8s-diff-port-999693" cluster
	I1019 12:51:40.782950  651601 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:51:40.784234  651601 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:51:40.785319  651601 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:51:40.785361  651601 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:51:40.785356  651601 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:51:40.785370  651601 cache.go:58] Caching tarball of preloaded images
	I1019 12:51:40.785509  651601 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:51:40.785521  651601 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:51:40.785608  651601 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:51:40.785626  651601 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json: {Name:mk260dc75c8a88a54ba0483b7d3ec6613305382c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:51:40.805786  651601 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:51:40.805811  651601 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:51:40.805829  651601 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:51:40.805861  651601 start.go:360] acquireMachinesLock for default-k8s-diff-port-999693: {Name:mke26e7439408c8adecea1bbb9344a31dd77b3c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:51:40.805975  651601 start.go:364] duration metric: took 93.554µs to acquireMachinesLock for "default-k8s-diff-port-999693"
	I1019 12:51:40.806005  651601 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:51:40.806081  651601 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:51:37.119165  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:51:39.618139  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:51:40.807774  651601 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:51:40.808013  651601 start.go:159] libmachine.API.Create for "default-k8s-diff-port-999693" (driver="docker")
	I1019 12:51:40.808047  651601 client.go:168] LocalClient.Create starting
	I1019 12:51:40.808112  651601 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:51:40.808154  651601 main.go:141] libmachine: Decoding PEM data...
	I1019 12:51:40.808188  651601 main.go:141] libmachine: Parsing certificate...
	I1019 12:51:40.808266  651601 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:51:40.808299  651601 main.go:141] libmachine: Decoding PEM data...
	I1019 12:51:40.808326  651601 main.go:141] libmachine: Parsing certificate...
	I1019 12:51:40.808725  651601 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:51:40.825554  651601 cli_runner.go:211] docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:51:40.825646  651601 network_create.go:284] running [docker network inspect default-k8s-diff-port-999693] to gather additional debugging logs...
	I1019 12:51:40.825666  651601 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693
	W1019 12:51:40.842906  651601 cli_runner.go:211] docker network inspect default-k8s-diff-port-999693 returned with exit code 1
	I1019 12:51:40.842937  651601 network_create.go:287] error running [docker network inspect default-k8s-diff-port-999693]: docker network inspect default-k8s-diff-port-999693: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-999693 not found
	I1019 12:51:40.842950  651601 network_create.go:289] output of [docker network inspect default-k8s-diff-port-999693]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-999693 not found
	
	** /stderr **
	I1019 12:51:40.843086  651601 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:51:40.860550  651601 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:51:40.861289  651601 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:51:40.861891  651601 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:51:40.862799  651601 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:51:40.863637  651601 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee37b0}
	I1019 12:51:40.863666  651601 network_create.go:124] attempt to create docker network default-k8s-diff-port-999693 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1019 12:51:40.863727  651601 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-999693 default-k8s-diff-port-999693
	I1019 12:51:40.925964  651601 network_create.go:108] docker network default-k8s-diff-port-999693 192.168.85.0/24 created
	I1019 12:51:40.926003  651601 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-999693" container
	I1019 12:51:40.926079  651601 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:51:40.945621  651601 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-999693 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-999693 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:51:40.965247  651601 oci.go:103] Successfully created a docker volume default-k8s-diff-port-999693
	I1019 12:51:40.965378  651601 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-999693-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-999693 --entrypoint /usr/bin/test -v default-k8s-diff-port-999693:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:51:41.377573  651601 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-999693
	I1019 12:51:41.377617  651601 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:51:41.377642  651601 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:51:41.377698  651601 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-999693:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
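Note: the network.go lines above show the free-subnet search that precedes network creation: candidates 192.168.49.0/24, .58, .67 and .76 are skipped as taken, and 192.168.85.0/24 is chosen. A minimal Go sketch of that scan shape (the step of 9 in the third octet matches the candidates logged here; the taken-set lookup is an illustrative stand-in for minikube's host-interface inspection):

	// freesubnet.go - a hedged sketch of the subnet scan in the log above.
	package main

	import "fmt"

	// firstFreeSubnet walks 192.168.49.0/24, .58, .67, ... and returns the
	// first candidate not already in use.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, // br-a4629926c406
			"192.168.58.0/24": true, // br-6cccd776798e
			"192.168.67.0/24": true, // br-91914a6ce07e
			"192.168.76.0/24": true, // br-fcd0a3e89589
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, as logged
	}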
	
	
	==> CRI-O <==
	Oct 19 12:51:34 no-preload-561408 crio[767]: time="2025-10-19T12:51:34.640148728Z" level=info msg="Starting container: e7d5859f865a12a5cf0baf62454db406362fe50bac43ac6d5d39b3185c61ed43" id=ddf41e16-71c8-4d7d-ab67-981b85981de6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:51:34 no-preload-561408 crio[767]: time="2025-10-19T12:51:34.644776074Z" level=info msg="Started container" PID=2861 containerID=e7d5859f865a12a5cf0baf62454db406362fe50bac43ac6d5d39b3185c61ed43 description=kube-system/coredns-66bc5c9577-pgxlp/coredns id=ddf41e16-71c8-4d7d-ab67-981b85981de6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dea6382ccb424c3f9323903cdbbb51ff72e51f9cc22297587bb278921a857278
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.457478057Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b65bb7e6-5f4b-4801-a43a-b60d7299d078 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.457611773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.463165637Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9f19e388b14887c6da1bccb0ca4c5984ce60f68bc19e2eb4674f3fc810f94e0 UID:ef865d00-0bef-4438-9c22-1892d84e64cb NetNS:/var/run/netns/264b7817-e046-4b1e-85ea-9e2480793f45 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a41a8}] Aliases:map[]}"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.463213714Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.474650616Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9f19e388b14887c6da1bccb0ca4c5984ce60f68bc19e2eb4674f3fc810f94e0 UID:ef865d00-0bef-4438-9c22-1892d84e64cb NetNS:/var/run/netns/264b7817-e046-4b1e-85ea-9e2480793f45 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a41a8}] Aliases:map[]}"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.474788496Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.475797591Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.476806188Z" level=info msg="Ran pod sandbox f9f19e388b14887c6da1bccb0ca4c5984ce60f68bc19e2eb4674f3fc810f94e0 with infra container: default/busybox/POD" id=b65bb7e6-5f4b-4801-a43a-b60d7299d078 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.478038181Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=05ffd75e-10a3-4dd6-960b-64fdbced9400 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.478180198Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=05ffd75e-10a3-4dd6-960b-64fdbced9400 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.478229434Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=05ffd75e-10a3-4dd6-960b-64fdbced9400 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.478922768Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f0a3de55-6655-4b92-ba19-f8513892084c name=/runtime.v1.ImageService/PullImage
	Oct 19 12:51:37 no-preload-561408 crio[767]: time="2025-10-19T12:51:37.480558725Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.188870515Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f0a3de55-6655-4b92-ba19-f8513892084c name=/runtime.v1.ImageService/PullImage
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.189616977Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=aa42bb47-6da6-4af4-a3bb-46ff21436afc name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.191398597Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b49aa257-0986-4a05-b567-bbabc30388b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.196764762Z" level=info msg="Creating container: default/busybox/busybox" id=404dea44-a444-48ca-bdfb-f784db6852ec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.197468574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.20093617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.201447503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.226249074Z" level=info msg="Created container 26e617cfe37c5f4e3ac5e8318c1b27aeeae848516e4b5e557ff492dc047eb3c1: default/busybox/busybox" id=404dea44-a444-48ca-bdfb-f784db6852ec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.226966825Z" level=info msg="Starting container: 26e617cfe37c5f4e3ac5e8318c1b27aeeae848516e4b5e557ff492dc047eb3c1" id=fa24ed63-7823-43f2-80d6-a9fbf70d41e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:51:38 no-preload-561408 crio[767]: time="2025-10-19T12:51:38.228901486Z" level=info msg="Started container" PID=2931 containerID=26e617cfe37c5f4e3ac5e8318c1b27aeeae848516e4b5e557ff492dc047eb3c1 description=default/busybox/busybox id=fa24ed63-7823-43f2-80d6-a9fbf70d41e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9f19e388b14887c6da1bccb0ca4c5984ce60f68bc19e2eb4674f3fc810f94e0
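Note: the CRI-O entries above record the standard CRI pull-if-absent flow for the busybox image: ImageStatus reports the image missing, PullImage fetches it by digest, then CreateContainer/StartContainer run it. A hedged Go sketch of the first half of that sequence against the CRI gRPC API (the socket path is CRI-O's default install location and an assumption here):

	// pullifabsent.go - a sketch of the ImageStatus -> PullImage sequence.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		images := runtimeapi.NewImageServiceClient(conn)
		ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

		// "Checking image status" / "not found" in the log above.
		status, err := images.ImageStatus(context.Background(),
			&runtimeapi.ImageStatusRequest{Image: ref})
		if err != nil {
			log.Fatal(err)
		}
		if status.Image == nil {
			// "Pulling image" / "Pulled image" in the log above.
			if _, err := images.PullImage(context.Background(),
				&runtimeapi.PullImageRequest{Image: ref}); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("image present")
	}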
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	26e617cfe37c5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f9f19e388b148       busybox                                     default
	e7d5859f865a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   dea6382ccb424       coredns-66bc5c9577-pgxlp                    kube-system
	11ef2bee0cac6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   809a7cf5e75c4       storage-provisioner                         kube-system
	7d638393a1def       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   c897a5dbc7472       kindnet-kq4cq                               kube-system
	e89f6a9c7b69d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   f977b749a7815       kube-proxy-lppwp                            kube-system
	c116de584e3cf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      36 seconds ago      Running             etcd                      0                   6ddd99e84d180       etcd-no-preload-561408                      kube-system
	94eddb4f2dcbb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      36 seconds ago      Running             kube-controller-manager   0                   188d3c7b6dc57       kube-controller-manager-no-preload-561408   kube-system
	231c95d7ec944       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      36 seconds ago      Running             kube-scheduler            0                   8fba07e0c8940       kube-scheduler-no-preload-561408            kube-system
	8e275a88094ee       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      36 seconds ago      Running             kube-apiserver            0                   66183e4ff8296       kube-apiserver-no-preload-561408            kube-system
	
	
	==> coredns [e7d5859f865a12a5cf0baf62454db406362fe50bac43ac6d5d39b3185c61ed43] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44299 - 38869 "HINFO IN 8833751227145718205.2642047550578934962. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0834053s
	
	
	==> describe nodes <==
	Name:               no-preload-561408
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-561408
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-561408
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-561408
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:51:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:51:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:51:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:51:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:51:45 +0000   Sun, 19 Oct 2025 12:51:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-561408
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                7f18081e-0db1-4ca2-b083-85e9821fdde2
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-pgxlp                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-561408                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-kq4cq                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-561408             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-561408    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-lppwp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-561408             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-561408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-561408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-561408 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-561408 event: Registered Node no-preload-561408 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-561408 status is now: NodeReady
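	
	Note: the Allocated resources block above is simply the column sums of the per-pod figures listed under Non-terminated Pods: CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m (about 10% of the 8-CPU node), CPU limits 100m (kindnet only), memory requests 70Mi + 100Mi + 50Mi = 220Mi, and memory limits 170Mi + 50Mi = 220Mi. Nothing here suggests resource pressure on the node.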
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [c116de584e3cf4377ae9d295e1a5c47f78ff458e95240c210a0ff6cbf4b99acb] <==
	{"level":"warn","ts":"2025-10-19T12:51:11.507999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.516080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.526529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.533874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.546222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.550682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.557438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.566103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.572631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.588082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.599521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.606515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:11.654632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44426","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:51:12.415498Z","caller":"traceutil/trace.go:172","msg":"trace[616530354] linearizableReadLoop","detail":"{readStateIndex:6; appliedIndex:6; }","duration":"151.33062ms","start":"2025-10-19T12:51:12.264140Z","end":"2025-10-19T12:51:12.415471Z","steps":["trace[616530354] 'read index received'  (duration: 151.32205ms)","trace[616530354] 'applied index is now lower than readState.Index'  (duration: 7.069µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:51:12.415667Z","caller":"traceutil/trace.go:172","msg":"trace[1065525489] transaction","detail":"{read_only:false; response_revision:4; number_of_response:1; }","duration":"152.866367ms","start":"2025-10-19T12:51:12.262773Z","end":"2025-10-19T12:51:12.415639Z","steps":["trace[1065525489] 'process raft request'  (duration: 152.76313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:12.415681Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.481539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-19T12:51:12.415802Z","caller":"traceutil/trace.go:172","msg":"trace[1047071347] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:3; }","duration":"151.654227ms","start":"2025-10-19T12:51:12.264135Z","end":"2025-10-19T12:51:12.415789Z","steps":["trace[1047071347] 'agreement among raft nodes before linearized reading'  (duration: 151.441847ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:12.434502Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.4193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-19T12:51:12.434564Z","caller":"traceutil/trace.go:172","msg":"trace[1747674959] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:0; response_revision:4; }","duration":"153.495964ms","start":"2025-10-19T12:51:12.281055Z","end":"2025-10-19T12:51:12.434551Z","steps":["trace[1747674959] 'agreement among raft nodes before linearized reading'  (duration: 153.340563ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:12.434787Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.646773ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-19T12:51:12.434814Z","caller":"traceutil/trace.go:172","msg":"trace[892234459] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:0; response_revision:11; }","duration":"139.676382ms","start":"2025-10-19T12:51:12.295128Z","end":"2025-10-19T12:51:12.434805Z","steps":["trace[892234459] 'agreement among raft nodes before linearized reading'  (duration: 139.625613ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T12:51:12.434916Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.028272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-561408\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-19T12:51:12.434943Z","caller":"traceutil/trace.go:172","msg":"trace[471360041] range","detail":"{range_begin:/registry/csinodes/no-preload-561408; range_end:; response_count:0; response_revision:12; }","duration":"150.052219ms","start":"2025-10-19T12:51:12.284880Z","end":"2025-10-19T12:51:12.434932Z","steps":["trace[471360041] 'agreement among raft nodes before linearized reading'  (duration: 150.010611ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T12:51:13.596646Z","caller":"traceutil/trace.go:172","msg":"trace[1597138299] transaction","detail":"{read_only:false; response_revision:110; number_of_response:1; }","duration":"104.118488ms","start":"2025-10-19T12:51:13.492505Z","end":"2025-10-19T12:51:13.596624Z","steps":["trace[1597138299] 'process raft request'  (duration: 32.936286ms)","trace[1597138299] 'compare'  (duration: 71.020219ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-19T12:51:44.857861Z","caller":"traceutil/trace.go:172","msg":"trace[251716891] transaction","detail":"{read_only:false; response_revision:430; number_of_response:1; }","duration":"123.968241ms","start":"2025-10-19T12:51:44.733876Z","end":"2025-10-19T12:51:44.857844Z","steps":["trace[251716891] 'process raft request'  (duration: 123.826087ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:51:46 up  2:34,  0 user,  load average: 7.66, 5.22, 3.08
	Linux no-preload-561408 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d638393a1deffc8eb3f14bae312c116f84aa91c3e567ed0049cee50a52237a8] <==
	I1019 12:51:23.644397       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:51:23.644670       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:51:23.644858       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:51:23.644881       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:51:23.644921       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:51:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:51:23.847452       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:51:23.847481       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:51:23.847493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:51:23.847643       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:51:24.048373       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:51:24.048410       1 metrics.go:72] Registering metrics
	I1019 12:51:24.139663       1 controller.go:711] "Syncing nftables rules"
	I1019 12:51:33.851548       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1019 12:51:33.851620       1 main.go:301] handling current node
	I1019 12:51:43.850846       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1019 12:51:43.850876       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e275a88094eefd31d1d367d2b09d630bb52d4f264db2c3dd42982c9e351d141] <==
	I1019 12:51:12.439545       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:12.439645       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1019 12:51:12.442944       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1019 12:51:12.466456       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1019 12:51:12.475547       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:12.475743       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:51:12.647302       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:51:13.148983       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:51:13.157021       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:51:13.157043       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:51:14.229296       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:51:14.288607       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:51:14.353418       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:51:14.363459       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1019 12:51:14.364889       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:51:14.370826       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:51:15.173769       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:51:15.179472       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:51:15.188737       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:51:15.196541       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:51:21.069418       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:51:21.125038       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:21.129704       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:21.263495       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1019 12:51:45.237281       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:41094: use of closed network connection
	
	
	==> kube-controller-manager [94eddb4f2dcbbc23ed0af3c35f39da547d6ba886258a9800bdec2e04418a7498] <==
	I1019 12:51:20.210884       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 12:51:20.210875       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:51:20.210959       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 12:51:20.211027       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:51:20.211202       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:51:20.211228       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:51:20.212098       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:51:20.212109       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:51:20.212221       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:51:20.212316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:51:20.212319       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 12:51:20.212347       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:51:20.212371       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:51:20.212392       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 12:51:20.212325       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:51:20.214889       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:51:20.220198       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:51:20.220198       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:51:20.225397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:51:20.225413       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:51:20.225433       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:51:20.231783       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:51:20.235005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:51:20.237152       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:51:35.211719       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e89f6a9c7b69dfa10f403a6856dbc255d652801f7760e8ba6a204bede6da0127] <==
	I1019 12:51:21.678025       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:51:21.726164       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:51:21.827171       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:51:21.827207       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:51:21.827300       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:51:21.846002       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:51:21.846053       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:51:21.850897       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:51:21.851195       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:51:21.851223       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:51:21.852323       1 config.go:200] "Starting service config controller"
	I1019 12:51:21.852355       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:51:21.852368       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:51:21.852395       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:51:21.852499       1 config.go:309] "Starting node config controller"
	I1019 12:51:21.852509       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:51:21.852530       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:51:21.852538       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:51:21.952546       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:51:21.952572       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:51:21.952604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:51:21.952641       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [231c95d7ec9449d542bf14290abc4e3c068c7ee284e4835024e2dc70c4bc2dd3] <==
	E1019 12:51:12.577676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:51:12.577740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:51:12.577839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:51:12.577567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:51:12.577870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:51:12.577806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:51:12.577979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:51:12.578084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:51:12.578137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:51:13.388471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:51:13.395263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:51:13.403244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:51:13.421578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:51:13.421652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:51:13.535517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:51:13.543596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:51:13.546956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:51:13.620164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:51:13.685211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:51:13.687492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:51:13.727324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:51:13.747710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:51:13.762609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:51:13.785466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1019 12:51:16.363089       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:51:16 no-preload-561408 kubelet[2254]: I1019 12:51:16.088895    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-561408" podStartSLOduration=1.08887397 podStartE2EDuration="1.08887397s" podCreationTimestamp="2025-10-19 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:16.080603647 +0000 UTC m=+1.141826872" watchObservedRunningTime="2025-10-19 12:51:16.08887397 +0000 UTC m=+1.150097185"
	Oct 19 12:51:16 no-preload-561408 kubelet[2254]: I1019 12:51:16.099267    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-561408" podStartSLOduration=1.099247271 podStartE2EDuration="1.099247271s" podCreationTimestamp="2025-10-19 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:16.088821098 +0000 UTC m=+1.150044322" watchObservedRunningTime="2025-10-19 12:51:16.099247271 +0000 UTC m=+1.160470495"
	Oct 19 12:51:16 no-preload-561408 kubelet[2254]: I1019 12:51:16.110592    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-561408" podStartSLOduration=1.11056441 podStartE2EDuration="1.11056441s" podCreationTimestamp="2025-10-19 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:16.099155117 +0000 UTC m=+1.160378341" watchObservedRunningTime="2025-10-19 12:51:16.11056441 +0000 UTC m=+1.171787634"
	Oct 19 12:51:16 no-preload-561408 kubelet[2254]: I1019 12:51:16.110931    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-561408" podStartSLOduration=2.110914282 podStartE2EDuration="2.110914282s" podCreationTimestamp="2025-10-19 12:51:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:16.110859567 +0000 UTC m=+1.172082786" watchObservedRunningTime="2025-10-19 12:51:16.110914282 +0000 UTC m=+1.172137506"
	Oct 19 12:51:20 no-preload-561408 kubelet[2254]: I1019 12:51:20.265459    2254 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 12:51:20 no-preload-561408 kubelet[2254]: I1019 12:51:20.266170    2254 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361016    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e5712d3-d393-4b98-8346-442229d87b07-lib-modules\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361057    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf6aee53-b434-4009-aeb6-36cb62fc0769-lib-modules\") pod \"kube-proxy-lppwp\" (UID: \"cf6aee53-b434-4009-aeb6-36cb62fc0769\") " pod="kube-system/kube-proxy-lppwp"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361078    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1e5712d3-d393-4b98-8346-442229d87b07-cni-cfg\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361150    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2p5\" (UniqueName: \"kubernetes.io/projected/1e5712d3-d393-4b98-8346-442229d87b07-kube-api-access-jd2p5\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361191    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf6aee53-b434-4009-aeb6-36cb62fc0769-xtables-lock\") pod \"kube-proxy-lppwp\" (UID: \"cf6aee53-b434-4009-aeb6-36cb62fc0769\") " pod="kube-system/kube-proxy-lppwp"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361226    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2k75\" (UniqueName: \"kubernetes.io/projected/cf6aee53-b434-4009-aeb6-36cb62fc0769-kube-api-access-s2k75\") pod \"kube-proxy-lppwp\" (UID: \"cf6aee53-b434-4009-aeb6-36cb62fc0769\") " pod="kube-system/kube-proxy-lppwp"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361255    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e5712d3-d393-4b98-8346-442229d87b07-xtables-lock\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:51:21 no-preload-561408 kubelet[2254]: I1019 12:51:21.361317    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf6aee53-b434-4009-aeb6-36cb62fc0769-kube-proxy\") pod \"kube-proxy-lppwp\" (UID: \"cf6aee53-b434-4009-aeb6-36cb62fc0769\") " pod="kube-system/kube-proxy-lppwp"
	Oct 19 12:51:22 no-preload-561408 kubelet[2254]: I1019 12:51:22.072590    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lppwp" podStartSLOduration=1.072568643 podStartE2EDuration="1.072568643s" podCreationTimestamp="2025-10-19 12:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:22.072503502 +0000 UTC m=+7.133726725" watchObservedRunningTime="2025-10-19 12:51:22.072568643 +0000 UTC m=+7.133791869"
	Oct 19 12:51:24 no-preload-561408 kubelet[2254]: I1019 12:51:24.846170    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kq4cq" podStartSLOduration=2.00591313 podStartE2EDuration="3.846147933s" podCreationTimestamp="2025-10-19 12:51:21 +0000 UTC" firstStartedPulling="2025-10-19 12:51:21.600610902 +0000 UTC m=+6.661834117" lastFinishedPulling="2025-10-19 12:51:23.440845715 +0000 UTC m=+8.502068920" observedRunningTime="2025-10-19 12:51:24.077706695 +0000 UTC m=+9.138929931" watchObservedRunningTime="2025-10-19 12:51:24.846147933 +0000 UTC m=+9.907371154"
	Oct 19 12:51:34 no-preload-561408 kubelet[2254]: I1019 12:51:34.186530    2254 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 12:51:34 no-preload-561408 kubelet[2254]: I1019 12:51:34.256795    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8c92cd5-cb77-4b3d-bc5a-20b606b8794d-tmp\") pod \"storage-provisioner\" (UID: \"e8c92cd5-cb77-4b3d-bc5a-20b606b8794d\") " pod="kube-system/storage-provisioner"
	Oct 19 12:51:34 no-preload-561408 kubelet[2254]: I1019 12:51:34.256860    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9rz2\" (UniqueName: \"kubernetes.io/projected/e8c92cd5-cb77-4b3d-bc5a-20b606b8794d-kube-api-access-f9rz2\") pod \"storage-provisioner\" (UID: \"e8c92cd5-cb77-4b3d-bc5a-20b606b8794d\") " pod="kube-system/storage-provisioner"
	Oct 19 12:51:34 no-preload-561408 kubelet[2254]: I1019 12:51:34.256890    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af0816b7-b4de-4d64-a4bb-0efbc821bb53-config-volume\") pod \"coredns-66bc5c9577-pgxlp\" (UID: \"af0816b7-b4de-4d64-a4bb-0efbc821bb53\") " pod="kube-system/coredns-66bc5c9577-pgxlp"
	Oct 19 12:51:34 no-preload-561408 kubelet[2254]: I1019 12:51:34.256982    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rd2\" (UniqueName: \"kubernetes.io/projected/af0816b7-b4de-4d64-a4bb-0efbc821bb53-kube-api-access-d2rd2\") pod \"coredns-66bc5c9577-pgxlp\" (UID: \"af0816b7-b4de-4d64-a4bb-0efbc821bb53\") " pod="kube-system/coredns-66bc5c9577-pgxlp"
	Oct 19 12:51:35 no-preload-561408 kubelet[2254]: I1019 12:51:35.110556    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.110534627 podStartE2EDuration="14.110534627s" podCreationTimestamp="2025-10-19 12:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:35.110101449 +0000 UTC m=+20.171324673" watchObservedRunningTime="2025-10-19 12:51:35.110534627 +0000 UTC m=+20.171757851"
	Oct 19 12:51:37 no-preload-561408 kubelet[2254]: I1019 12:51:37.151149    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pgxlp" podStartSLOduration=16.151122643 podStartE2EDuration="16.151122643s" podCreationTimestamp="2025-10-19 12:51:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:35.126658704 +0000 UTC m=+20.187881929" watchObservedRunningTime="2025-10-19 12:51:37.151122643 +0000 UTC m=+22.212345869"
	Oct 19 12:51:37 no-preload-561408 kubelet[2254]: I1019 12:51:37.277283    2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4c8b\" (UniqueName: \"kubernetes.io/projected/ef865d00-0bef-4438-9c22-1892d84e64cb-kube-api-access-c4c8b\") pod \"busybox\" (UID: \"ef865d00-0bef-4438-9c22-1892d84e64cb\") " pod="default/busybox"
	Oct 19 12:51:39 no-preload-561408 kubelet[2254]: I1019 12:51:39.122136    2254 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.409839944 podStartE2EDuration="2.122116868s" podCreationTimestamp="2025-10-19 12:51:37 +0000 UTC" firstStartedPulling="2025-10-19 12:51:37.478508399 +0000 UTC m=+22.539731606" lastFinishedPulling="2025-10-19 12:51:38.190785324 +0000 UTC m=+23.252008530" observedRunningTime="2025-10-19 12:51:39.121977609 +0000 UTC m=+24.183200835" watchObservedRunningTime="2025-10-19 12:51:39.122116868 +0000 UTC m=+24.183340093"
	
	
	==> storage-provisioner [11ef2bee0cac6cf3f5cb97ac6d2d68c6732e3c38eafbc3ff3123e77f11f6390d] <==
	I1019 12:51:34.638398       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:51:34.651840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:51:34.653243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:51:34.666120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:34.677621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:51:34.677815       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:51:34.678095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-561408_0fc67bd0-324d-47ff-8261-7b2409eb07f6!
	I1019 12:51:34.678160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2a2da65-ffdf-4b5c-be11-c5e8f123ddea", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-561408_0fc67bd0-324d-47ff-8261-7b2409eb07f6 became leader
	W1019 12:51:34.686706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:34.692683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:51:34.779160       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-561408_0fc67bd0-324d-47ff-8261-7b2409eb07f6!
	W1019 12:51:36.696455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:36.702352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:38.705629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:38.710049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:40.713260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:40.719799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:42.722563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:42.727342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:44.731641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:44.858981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:46.862399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:51:46.867613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
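
The etcd entries in the post-mortem log above are structured JSON, so the slow "apply request took too long" warnings can be pulled out mechanically instead of read by eye. A minimal sketch, assuming the dump has been saved to etcd.log and that jq is available (both are assumptions; neither tool is part of the test harness):

	# keep only the slow-apply warnings, strip the leading indentation,
	# and print timestamp, observed duration, and the offending request
	grep 'apply request took too long' etcd.log \
	  | sed 's/^[^{]*//' \
	  | jq -r '[.ts, .took, .request] | @tsv'

Each row can then be lined up against the load average in the kernel section (7.66 at capture time) to see whether the slow reads coincide with host pressure.
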
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-561408 -n no-preload-561408
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-561408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.789961ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
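
The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight paused check, which shells out to runc before enabling the addon and fails because /run/runc does not exist on this crio node. A minimal sketch of re-running the failing check by hand, assuming the embed-certs-123864 node is still up:

	# the exact command the error message reports, executed inside the node
	minikube -p embed-certs-123864 ssh -- sudo runc list -f json

	# confirm whether the state directory the error complains about exists
	minikube -p embed-certs-123864 ssh -- ls -ld /run/runc
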
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-123864 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-123864 describe deploy/metrics-server -n kube-system: exit status 1 (56.898482ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-123864 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
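
The check on start_stop_delete_test.go:219 greps the describe output for the rewritten image reference, so once the describe call returns NotFound the expected string can never appear. A minimal sketch of the equivalent manual check, assuming the metrics-server deployment had actually been created:

	kubectl --context embed-certs-123864 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected, per the test's --images/--registries flags:
	# fake.domain/registry.k8s.io/echoserver:1.4
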
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-123864
helpers_test.go:243: (dbg) docker inspect embed-certs-123864:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	        "Created": "2025-10-19T12:51:12.601870775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 642924,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:51:12.678012516Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509-json.log",
	        "Name": "/embed-certs-123864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-123864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-123864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	                "LowerDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-123864",
	                "Source": "/var/lib/docker/volumes/embed-certs-123864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-123864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-123864",
	                "name.minikube.sigs.k8s.io": "embed-certs-123864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0d0d0b77a6f10ab287f4dac62f7b8346b5a312301c87236c3ad13ef4e383778",
	            "SandboxKey": "/var/run/docker/netns/a0d0d0b77a6f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-123864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:c8:dc:7a:c1:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd0a3e89589b9fe587e991244f1cb1f39b034b86cfecd1e038afdfb125c5bb4",
	                    "EndpointID": "cdc86c5e8769e87ba1ddc267b872cf201eff91ce52925752bfad555daee8a745",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-123864",
	                        "53e8a5bc9e53"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
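
The inspect output shows each container port published on a loopback ephemeral port (the API server's 8443/tcp lands on 127.0.0.1:33473). A minimal sketch of extracting just that one mapping with docker's Go-template formatter, using only the standard docker CLI:

	docker inspect embed-certs-123864 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# prints 33473 for the state captured above
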
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-123864 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo containerd config dump                                                                                                                                                                                                  │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:06.021536  657553 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:06.021680  657553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:06.021694  657553 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:06.021700  657553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:06.022131  657553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:06.022798  657553 out.go:368] Setting JSON to false
	I1019 12:52:06.025052  657553 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9274,"bootTime":1760869052,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:06.025253  657553 start.go:141] virtualization: kvm guest
	I1019 12:52:06.027267  657553 out.go:179] * [no-preload-561408] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:06.030479  657553 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:06.030471  657553 notify.go:220] Checking for updates...
	I1019 12:52:06.032737  657553 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:06.033834  657553 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:06.034905  657553 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:06.035945  657553 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:06.037060  657553 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:06.039056  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:06.039841  657553 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:06.079502  657553 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:06.079756  657553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:06.176630  657553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 12:52:06.161981405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:06.176787  657553 docker.go:318] overlay module found
	I1019 12:52:06.178799  657553 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:06.180448  657553 start.go:305] selected driver: docker
	I1019 12:52:06.180466  657553 start.go:925] validating driver "docker" against &{Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:06.180576  657553 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:06.181482  657553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:06.302353  657553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 12:52:06.286479089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:06.303099  657553 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:06.303183  657553 cni.go:84] Creating CNI manager for ""
	I1019 12:52:06.303319  657553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:06.303502  657553 start.go:349] cluster config:
	{Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:06.316881  657553 out.go:179] * Starting "no-preload-561408" primary control-plane node in "no-preload-561408" cluster
	I1019 12:52:06.318615  657553 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:06.322526  657553 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:06.327640  657553 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:06.327738  657553 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:06.328069  657553 cache.go:107] acquiring lock: {Name:mk5550171751fb66fbb8bbbf1840689496877f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328174  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 12:52:06.328185  657553 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.911µs
	I1019 12:52:06.328199  657553 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/config.json ...
	I1019 12:52:06.328214  657553 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 12:52:06.328234  657553 cache.go:107] acquiring lock: {Name:mk536b3e79f3c82320f5fd1d75cba698777893be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328282  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 12:52:06.328289  657553 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 60.101µs
	I1019 12:52:06.328297  657553 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 12:52:06.328309  657553 cache.go:107] acquiring lock: {Name:mke024304bcffc4ea281303157bf5c91e9430bca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328345  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 12:52:06.328352  657553 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 44.643µs
	I1019 12:52:06.328360  657553 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 12:52:06.328372  657553 cache.go:107] acquiring lock: {Name:mkd16e2a6ab077ae9b611f70a18ddfb328ed7273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328405  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 12:52:06.328411  657553 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.864µs
	I1019 12:52:06.328418  657553 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 12:52:06.328451  657553 cache.go:107] acquiring lock: {Name:mk44b800128ce65419f1f04875d5a608ed0e5a0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328495  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 12:52:06.328503  657553 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 54.974µs
	I1019 12:52:06.328469  657553 cache.go:107] acquiring lock: {Name:mk553f2fd2502ef0a79fb07ecb498f641a6bf044 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328524  657553 cache.go:107] acquiring lock: {Name:mk45e746ba750b6a63de6802a26f6a78ae57ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328562  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 12:52:06.328559  657553 cache.go:107] acquiring lock: {Name:mk9ddd4589a738691f68fcba3df7072d33f92e6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328592  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 12:52:06.328600  657553 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 77.93µs
	I1019 12:52:06.328605  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1019 12:52:06.328608  657553 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 12:52:06.328513  657553 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 12:52:06.328571  657553 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 112.184µs
	I1019 12:52:06.328614  657553 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 57.288µs
	I1019 12:52:06.328618  657553 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 12:52:06.328622  657553 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 12:52:06.328631  657553 cache.go:87] Successfully saved all images to host disk.
	I1019 12:52:06.359097  657553 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:06.359196  657553 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:06.359249  657553 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:52:06.359286  657553 start.go:360] acquireMachinesLock for no-preload-561408: {Name:mk03a123c2e4ac5bfd3445ed8fbfda61388ba21c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.359354  657553 start.go:364] duration metric: took 50.702µs to acquireMachinesLock for "no-preload-561408"
	I1019 12:52:06.359374  657553 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:06.359380  657553 fix.go:54] fixHost starting: 
	I1019 12:52:06.359721  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:06.386839  657553 fix.go:112] recreateIfNeeded on no-preload-561408: state=Stopped err=<nil>
	W1019 12:52:06.386966  657553 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:05.797514  651601 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:05.797581  651601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:05.797681  651601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:05.830112  651601 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:05.830214  651601 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:05.830344  651601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:05.830681  651601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:05.854590  651601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:05.873824  651601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:52:05.947416  651601 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:05.966348  651601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:05.974187  651601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:06.176456  651601 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 12:52:06.178096  651601 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:06.491995  651601 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
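	
	For context on the CoreDNS step logged at 12:52:05.873824 above: the sed pipeline rewrites the coredns ConfigMap so that a hosts block is inserted ahead of the forward plugin and a log directive ahead of errors. The resulting Corefile looks roughly like this minimal sketch (stock directives other than the ones the pipeline touches are omitted):
	
		.:53 {
		    log
		    errors
		    hosts {
		       192.168.85.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		}
	
	This is what lets pods resolve host.minikube.internal to the host bridge address (192.168.85.1 for this profile), matching the "host record injected into CoreDNS's ConfigMap" line at 12:52:06.176456.
	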
	I1019 12:52:03.856637  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.857815  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:03.857839  655442 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:03.857894  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.889135  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:03.892092  655442 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:03.892113  655442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:03.892174  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.896084  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:03.924853  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:04.009788  655442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:04.011166  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:04.018393  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:04.018415  655442 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:04.028714  655442 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:52:04.035509  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:04.038340  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:04.038360  655442 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:04.054689  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:04.054717  655442 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:04.072845  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:04.072930  655442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:04.091760  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:04.091798  655442 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:04.107831  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:04.107854  655442 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:04.124181  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:04.124212  655442 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:04.137076  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:04.137103  655442 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:04.150088  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:04.150151  655442 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:04.163719  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:06.275572  655442 node_ready.go:49] node "old-k8s-version-577062" is "Ready"
	I1019 12:52:06.275617  655442 node_ready.go:38] duration metric: took 2.246864402s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:52:06.275636  655442 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:06.275697  655442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1019 12:52:02.118320  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:04.119176  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:06.121056  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:07.100520  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.08931381s)
	I1019 12:52:07.100640  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.065093835s)
	I1019 12:52:07.463378  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.299599944s)
	I1019 12:52:07.463378  655442 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.187647784s)
	I1019 12:52:07.463513  655442 api_server.go:72] duration metric: took 3.636756668s to wait for apiserver process to appear ...
	I1019 12:52:07.463666  655442 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:07.463692  655442 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 12:52:07.465543  655442 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-577062 addons enable metrics-server
	
	I1019 12:52:07.467221  655442 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 12:52:06.493155  651601 addons.go:514] duration metric: took 737.35716ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:52:06.682068  651601 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-999693" context rescaled to 1 replicas
	W1019 12:52:08.180972  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	W1019 12:52:10.181291  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	I1019 12:52:06.389885  657553 out.go:252] * Restarting existing docker container for "no-preload-561408" ...
	I1019 12:52:06.390067  657553 cli_runner.go:164] Run: docker start no-preload-561408
	I1019 12:52:06.684685  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:06.714763  657553 kic.go:430] container "no-preload-561408" state is running.
	I1019 12:52:06.715254  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:06.739105  657553 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/config.json ...
	I1019 12:52:06.739457  657553 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:06.739597  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:06.760367  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:06.760837  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:06.760854  657553 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:06.761610  657553 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47010->127.0.0.1:33485: read: connection reset by peer
	I1019 12:52:09.897308  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-561408
	
	I1019 12:52:09.897338  657553 ubuntu.go:182] provisioning hostname "no-preload-561408"
	I1019 12:52:09.897406  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:09.915936  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:09.916189  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:09.916204  657553 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-561408 && echo "no-preload-561408" | sudo tee /etc/hostname
	I1019 12:52:10.058776  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-561408
	
	I1019 12:52:10.058861  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.077237  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:10.077531  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:10.077559  657553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-561408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-561408/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-561408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:10.213384  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
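	
	For reference, the hostname guard script above leaves /etc/hosts with a 127.0.1.1 mapping along these lines (an illustrative sketch; the rest of the file is whatever the kicbase image ships):
	
		127.0.0.1	localhost
		127.0.1.1	no-preload-561408
	
	Keeping 127.0.1.1 pointed at the machine name after the container is renamed avoids the classic "sudo: unable to resolve host" warning on Debian-based images.
	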
	I1019 12:52:10.213449  657553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:10.213484  657553 ubuntu.go:190] setting up certificates
	I1019 12:52:10.213500  657553 provision.go:84] configureAuth start
	I1019 12:52:10.213581  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:10.232099  657553 provision.go:143] copyHostCerts
	I1019 12:52:10.232168  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:10.232188  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:10.232264  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:10.232389  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:10.232402  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:10.232479  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:10.232584  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:10.232596  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:10.232646  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:10.232742  657553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.no-preload-561408 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-561408]
	I1019 12:52:10.474544  657553 provision.go:177] copyRemoteCerts
	I1019 12:52:10.474615  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:10.474662  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.493763  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:10.591892  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:10.610126  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:10.627794  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:52:10.645956  657553 provision.go:87] duration metric: took 432.438635ms to configureAuth
	I1019 12:52:10.645981  657553 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:10.646136  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:10.646253  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.664566  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:10.664836  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:10.664862  657553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:10.954783  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:10.954815  657553 machine.go:96] duration metric: took 4.215306426s to provisionDockerMachine
	I1019 12:52:10.954831  657553 start.go:293] postStartSetup for "no-preload-561408" (driver="docker")
	I1019 12:52:10.954845  657553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:10.954911  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:10.954960  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.973873  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:07.468334  655442 addons.go:514] duration metric: took 3.64150505s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 12:52:07.469589  655442 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1019 12:52:07.471181  655442 api_server.go:141] control plane version: v1.28.0
	I1019 12:52:07.471205  655442 api_server.go:131] duration metric: took 7.529968ms to wait for apiserver health ...
	I1019 12:52:07.471215  655442 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:07.476389  655442 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:07.476457  655442 system_pods.go:61] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:07.476471  655442 system_pods.go:61] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:07.476481  655442 system_pods.go:61] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:07.476494  655442 system_pods.go:61] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:07.476507  655442 system_pods.go:61] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:07.476517  655442 system_pods.go:61] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:07.476574  655442 system_pods.go:61] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:07.476583  655442 system_pods.go:61] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:07.476594  655442 system_pods.go:74] duration metric: took 5.37114ms to wait for pod list to return data ...
	I1019 12:52:07.476608  655442 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:07.480455  655442 default_sa.go:45] found service account: "default"
	I1019 12:52:07.480476  655442 default_sa.go:55] duration metric: took 3.861262ms for default service account to be created ...
	I1019 12:52:07.480487  655442 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:07.486365  655442 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:07.486398  655442 system_pods.go:89] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:07.486411  655442 system_pods.go:89] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:07.486451  655442 system_pods.go:89] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:07.486463  655442 system_pods.go:89] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:07.486472  655442 system_pods.go:89] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:07.486482  655442 system_pods.go:89] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:07.486489  655442 system_pods.go:89] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:07.486497  655442 system_pods.go:89] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:07.486508  655442 system_pods.go:126] duration metric: took 6.013889ms to wait for k8s-apps to be running ...
	I1019 12:52:07.486519  655442 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:07.486570  655442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:07.506943  655442 system_svc.go:56] duration metric: took 20.409832ms WaitForService to wait for kubelet
	I1019 12:52:07.506976  655442 kubeadm.go:586] duration metric: took 3.680220176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:07.506999  655442 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:07.510612  655442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:07.510645  655442 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:07.510663  655442 node_conditions.go:105] duration metric: took 3.657575ms to run NodePressure ...
	I1019 12:52:07.510680  655442 start.go:241] waiting for startup goroutines ...
	I1019 12:52:07.510690  655442 start.go:246] waiting for cluster config update ...
	I1019 12:52:07.510708  655442 start.go:255] writing updated cluster config ...
	I1019 12:52:07.510964  655442 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:07.516010  655442 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:07.522233  655442 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:09.527810  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:11.529100  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:08.618467  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:11.119035  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:11.071482  657553 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:11.075114  657553 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:11.075138  657553 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:11.075151  657553 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:11.075201  657553 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:11.075337  657553 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:11.075485  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:11.083624  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:11.101002  657553 start.go:296] duration metric: took 146.156667ms for postStartSetup
	I1019 12:52:11.101071  657553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:11.101106  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.119259  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.212600  657553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:11.217365  657553 fix.go:56] duration metric: took 4.857978374s for fixHost
	I1019 12:52:11.217392  657553 start.go:83] releasing machines lock for "no-preload-561408", held for 4.858027541s
	I1019 12:52:11.217484  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:11.235583  657553 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:11.235640  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.235690  657553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:11.235743  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.254508  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.255069  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.402958  657553 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:11.409621  657553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:11.444994  657553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:11.449769  657553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:11.449849  657553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:11.458371  657553 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:11.458399  657553 start.go:495] detecting cgroup driver to use...
	I1019 12:52:11.458447  657553 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:11.458504  657553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:11.472650  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:11.484869  657553 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:11.484940  657553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:11.499590  657553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:11.512402  657553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:11.596415  657553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:11.679377  657553 docker.go:234] disabling docker service ...
	I1019 12:52:11.679480  657553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:11.694788  657553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:11.707377  657553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:11.787388  657553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:11.878281  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
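The block above tears Docker down in a fixed order: stop the socket, stop the service, disable the socket, mask the service, then confirm with is-active. A hedged local sketch of that sequence (minikube actually issues these commands over SSH via ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) error {
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%v: %v: %s", args, err, out)
    	}
    	return nil
    }

    func disableDocker() error {
    	// Stop the socket before the service so socket activation
    	// cannot restart dockerd mid-teardown, then disable and mask.
    	steps := [][]string{
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	}
    	for _, s := range steps {
    		if err := run(s...); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	if err := disableDocker(); err != nil {
    		fmt.Println(err)
    	}
    }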
	I1019 12:52:11.890924  657553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:11.906692  657553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:11.906751  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.915738  657553 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:11.915800  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.924468  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.933011  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.941471  657553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:11.949504  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.958166  657553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.966742  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.975718  657553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:11.982855  657553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:11.989970  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:12.071282  657553 ssh_runner.go:195] Run: sudo systemctl restart crio
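The sed edits above rewrite single `key = value` lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager) before crio is restarted. A pure-Go sketch of the same rewrite, assuming that simple one-line TOML layout (setKey is illustrative, not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setKey replaces any existing `key = ...` line with key = "value",
    // mirroring the sed 's|^.*key = .*$|...|' invocations in the log.
    func setKey(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
    }

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setKey(conf, "cgroup_manager", "systemd")
    	if err := os.WriteFile(path, conf, 0644); err != nil {
    		fmt.Println(err)
    	}
    	// A daemon-reload plus `systemctl restart crio` follows in the
    	// log for the new config to take effect.
    }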
	I1019 12:52:12.181511  657553 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:12.181582  657553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:12.185571  657553 start.go:563] Will wait 60s for crictl version
	I1019 12:52:12.185623  657553 ssh_runner.go:195] Run: which crictl
	I1019 12:52:12.189194  657553 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:12.214572  657553 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:12.214640  657553 ssh_runner.go:195] Run: crio --version
	I1019 12:52:12.242554  657553 ssh_runner.go:195] Run: crio --version
	I1019 12:52:12.272501  657553 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:12.273754  657553 cli_runner.go:164] Run: docker network inspect no-preload-561408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:12.291231  657553 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:12.295321  657553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
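The /etc/hosts rewrite above is idempotent: strip any existing host.minikube.internal entry, then append the current gateway IP. A sketch of that grep -v / echo / cp pipeline in Go (pinHost is hypothetical and assumes no concurrent writers):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for this name (grep -v equivalent).
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }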
	I1019 12:52:12.306201  657553 kubeadm.go:883] updating cluster {Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:12.306305  657553 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:12.306334  657553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:12.338608  657553 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:12.338637  657553 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:12.338646  657553 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:12.338769  657553 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-561408 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:12.338856  657553 ssh_runner.go:195] Run: crio config
	I1019 12:52:12.386474  657553 cni.go:84] Creating CNI manager for ""
	I1019 12:52:12.386501  657553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:12.386523  657553 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:12.386564  657553 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-561408 NodeName:no-preload-561408 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:12.386734  657553 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-561408"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:12.386817  657553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:12.395314  657553 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:12.395374  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:12.403260  657553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:52:12.416734  657553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:12.429722  657553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
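The kubeadm.yaml.new shipped above is the multi-document config printed at kubeadm.go:196, generated from the options struct at kubeadm.go:190. A sketch of how one stanza (nodeRegistration) could be rendered with text/template; the template and field names here are illustrative, not minikube's actual generator:

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeReg mirrors the nodeRegistration stanza in the config above.
    const nodeReg = `nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
      taints: []
    `

    func main() {
    	t := template.Must(template.New("nodeReg").Parse(nodeReg))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"CRISocket": "/var/run/crio/crio.sock",
    		"NodeName":  "no-preload-561408",
    		"NodeIP":    "192.168.94.2",
    	})
    }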
	I1019 12:52:12.442398  657553 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:12.446399  657553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:12.457646  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:12.540361  657553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:12.562824  657553 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408 for IP: 192.168.94.2
	I1019 12:52:12.562847  657553 certs.go:195] generating shared ca certs ...
	I1019 12:52:12.562868  657553 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:12.563050  657553 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:12.563118  657553 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:12.563132  657553 certs.go:257] generating profile certs ...
	I1019 12:52:12.563244  657553 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/client.key
	I1019 12:52:12.563300  657553 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.key.efacda45
	I1019 12:52:12.563355  657553 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.key
	I1019 12:52:12.563546  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:12.563591  657553 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:12.563605  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:12.563631  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:12.563660  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:12.563688  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:12.563740  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:12.564751  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:12.586015  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:12.605379  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:12.625545  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:12.650503  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:52:12.668376  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:12.686762  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:12.703747  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:12.721156  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:12.738772  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:12.757188  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:12.775463  657553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:12.788297  657553 ssh_runner.go:195] Run: openssl version
	I1019 12:52:12.794682  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:12.803304  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.807123  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.807184  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.843773  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:12.852395  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:12.860740  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.864509  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.864565  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.902934  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:12.911575  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:12.920534  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.924261  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.924318  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.959387  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
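Each CA above gets a subject-hash symlink (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL-style lookups can find it in /etc/ssl/certs. A sketch of that hash-and-link step, shelling out to openssl exactly as the log does (linkCert is a hypothetical helper; run as root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCert(pem string) error {
    	// openssl x509 -hash -noout prints the subject hash, e.g.
    	// "b5213941"; the symlink is that hash plus ".0".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // replace any stale link, mirroring ln -fs
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }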
	I1019 12:52:12.968395  657553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:12.972181  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:13.007750  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:13.044824  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:13.093865  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:13.145444  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:13.203541  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
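The six openssl probes above all use `-checkend 86400`, i.e. "still valid 24 hours from now". The same check in pure Go with crypto/x509, as a sketch:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the certificate at path is still valid
    // d from now, the equivalent of `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }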
	I1019 12:52:13.245821  657553 kubeadm.go:400] StartCluster: {Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:13.245933  657553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:13.245992  657553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:13.278578  657553 cri.go:89] found id: "6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4"
	I1019 12:52:13.278602  657553 cri.go:89] found id: "f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7"
	I1019 12:52:13.278606  657553 cri.go:89] found id: "9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d"
	I1019 12:52:13.278609  657553 cri.go:89] found id: "01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115"
	I1019 12:52:13.278612  657553 cri.go:89] found id: ""
	I1019 12:52:13.278651  657553 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:13.291369  657553 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:13Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:13.291463  657553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:13.300232  657553 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:13.300255  657553 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:13.300304  657553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:13.309450  657553 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:13.310797  657553 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-561408" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:13.312106  657553 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-561408" cluster setting kubeconfig missing "no-preload-561408" context setting]
	I1019 12:52:13.313508  657553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.315974  657553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:13.324849  657553 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1019 12:52:13.324885  657553 kubeadm.go:601] duration metric: took 24.623091ms to restartPrimaryControlPlane
	I1019 12:52:13.324895  657553 kubeadm.go:402] duration metric: took 79.087378ms to StartCluster
	I1019 12:52:13.324916  657553 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.324984  657553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:13.327319  657553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.327622  657553 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:13.327716  657553 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:13.327817  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:13.327820  657553 addons.go:69] Setting storage-provisioner=true in profile "no-preload-561408"
	I1019 12:52:13.327838  657553 addons.go:238] Setting addon storage-provisioner=true in "no-preload-561408"
	W1019 12:52:13.327850  657553 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:13.327864  657553 addons.go:69] Setting default-storageclass=true in profile "no-preload-561408"
	I1019 12:52:13.327879  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.327879  657553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-561408"
	I1019 12:52:13.327870  657553 addons.go:69] Setting dashboard=true in profile "no-preload-561408"
	I1019 12:52:13.327986  657553 addons.go:238] Setting addon dashboard=true in "no-preload-561408"
	W1019 12:52:13.327997  657553 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:13.328040  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.328147  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.328299  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.328630  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.329270  657553 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:13.330520  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:13.354584  657553 addons.go:238] Setting addon default-storageclass=true in "no-preload-561408"
	W1019 12:52:13.354606  657553 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:13.354636  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.355105  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.355108  657553 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:13.355108  657553 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:13.356935  657553 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:13.356955  657553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:13.356975  657553 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:13.357007  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.358223  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:13.358242  657553 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:13.358298  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.391736  657553 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:13.391767  657553 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:13.391838  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.392176  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.392389  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.416463  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.489291  657553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:13.506209  657553 node_ready.go:35] waiting up to 6m0s for node "no-preload-561408" to be "Ready" ...
	I1019 12:52:13.507053  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:13.507078  657553 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:13.510575  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:13.521920  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:13.521943  657553 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:13.534827  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:13.541245  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:13.541269  657553 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:13.558572  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:13.558597  657553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:13.578361  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:13.578399  657553 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:13.592267  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:13.592294  657553 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:13.607060  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:13.607087  657553 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:13.621489  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:13.621511  657553 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:13.635208  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:13.635232  657553 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:13.647649  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
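The single kubectl apply above takes all ten dashboard manifests in one invocation. A sketch of assembling that command from a manifest list (a local exec.Command stand-in for minikube's ssh_runner; the path list is abbreviated):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, files []string) error {
    	// Build "apply -f a.yaml -f b.yaml ..." so all manifests go
    	// through one kubectl run, as in the log.
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	files := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml", // ... and the rest
    	}
    	if err := applyManifests("kubectl", "/var/lib/minikube/kubeconfig", files); err != nil {
    		fmt.Println(err)
    	}
    }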
	I1019 12:52:14.712976  657553 node_ready.go:49] node "no-preload-561408" is "Ready"
	I1019 12:52:14.713020  657553 node_ready.go:38] duration metric: took 1.20677668s for node "no-preload-561408" to be "Ready" ...
	I1019 12:52:14.713044  657553 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:14.713100  657553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:15.244946  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.734335605s)
	I1019 12:52:15.245021  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.710160862s)
	I1019 12:52:15.245115  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.59742692s)
	I1019 12:52:15.245136  657553 api_server.go:72] duration metric: took 1.917480531s to wait for apiserver process to appear ...
	I1019 12:52:15.245145  657553 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:15.245162  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:15.246598  657553 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-561408 addons enable metrics-server
	
	I1019 12:52:15.249535  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:15.249558  657553 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:15.252364  657553 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1019 12:52:12.681796  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	W1019 12:52:15.182126  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	I1019 12:52:15.253341  657553 addons.go:514] duration metric: took 1.925639227s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:15.745567  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:15.750652  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:15.750680  657553 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
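The 500s above are normal during startup: /healthz returns 500 while any poststarthook (here rbac/bootstrap-roles) is still pending, and the caller simply polls again, roughly every 500ms in this log. A sketch of such a poll loop, skipping TLS verification because the apiserver certificate is cluster-internal:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported "ok"
    			}
    			// 500 with failed poststarthooks means "not ready yet".
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
    	fmt.Println(waitHealthy("https://192.168.94.2:8443/healthz", time.Minute))
    }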
	W1019 12:52:13.530582  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:16.029365  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:13.618473  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:16.118349  641657 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:16.118385  641657 node_ready.go:38] duration metric: took 41.00326347s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:16.118405  641657 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:16.118476  641657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:16.131836  641657 api_server.go:72] duration metric: took 41.609178423s to wait for apiserver process to appear ...
	I1019 12:52:16.131860  641657 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:16.131881  641657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:16.137339  641657 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:16.138264  641657 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:16.138287  641657 api_server.go:131] duration metric: took 6.421314ms to wait for apiserver health ...
	I1019 12:52:16.138295  641657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:16.141339  641657 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:16.141370  641657 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.141376  641657 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.141383  641657 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.141390  641657 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.141398  641657 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.141401  641657 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.141405  641657 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.141410  641657 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.141416  641657 system_pods.go:74] duration metric: took 3.117331ms to wait for pod list to return data ...
	I1019 12:52:16.141466  641657 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:16.143704  641657 default_sa.go:45] found service account: "default"
	I1019 12:52:16.143719  641657 default_sa.go:55] duration metric: took 2.248215ms for default service account to be created ...
	I1019 12:52:16.143726  641657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:16.146129  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.146153  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.146158  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.146164  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.146167  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.146172  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.146175  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.146179  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.146184  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.146203  641657 retry.go:31] will retry after 285.400832ms: missing components: kube-dns
	I1019 12:52:16.436535  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.436567  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.436572  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.436580  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.436584  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.436588  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.436592  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.436595  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.436599  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.436615  641657 retry.go:31] will retry after 310.044699ms: missing components: kube-dns
	I1019 12:52:16.750571  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.750602  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running
	I1019 12:52:16.750611  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.750616  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.750622  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.750627  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.750631  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.750636  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.750641  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running
	I1019 12:52:16.750650  641657 system_pods.go:126] duration metric: took 606.917887ms to wait for k8s-apps to be running ...
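The retry.go lines above re-list the kube-system pods with growing delays (285ms, then 310ms) until nothing is reported missing. A generic sketch of that retry-until-ready pattern; the fake checker here stands in for minikube's system_pods logic:

    package main

    import (
    	"fmt"
    	"time"
    )

    // retryUntil keeps invoking check with roughly increasing delays
    // until it succeeds or the total budget is spent.
    func retryUntil(budget time.Duration, check func() error) error {
    	deadline := time.Now().Add(budget)
    	delay := 250 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("gave up: %v", err)
    		}
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		if delay < 2*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	missing := 2
    	err := retryUntil(10*time.Second, func() error {
    		if missing > 0 { // pretend kube-dns needs two more polls
    			missing--
    			return fmt.Errorf("missing components: kube-dns")
    		}
    		return nil
    	})
    	fmt.Println(err)
    }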
	I1019 12:52:16.750663  641657 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:16.750723  641657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:16.764587  641657 system_svc.go:56] duration metric: took 13.912641ms WaitForService to wait for kubelet
	I1019 12:52:16.764619  641657 kubeadm.go:586] duration metric: took 42.241965825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:16.764646  641657 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:16.767727  641657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:16.767757  641657 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:16.767773  641657 node_conditions.go:105] duration metric: took 3.120512ms to run NodePressure ...
	I1019 12:52:16.767786  641657 start.go:241] waiting for startup goroutines ...
	I1019 12:52:16.767800  641657 start.go:246] waiting for cluster config update ...
	I1019 12:52:16.767814  641657 start.go:255] writing updated cluster config ...
	I1019 12:52:16.768149  641657 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:16.773114  641657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:16.777330  641657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.782062  641657 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:52:16.782086  641657 pod_ready.go:86] duration metric: took 4.735811ms for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.784129  641657 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.788298  641657 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:52:16.788321  641657 pod_ready.go:86] duration metric: took 4.171088ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.790285  641657 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.794219  641657 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:52:16.794240  641657 pod_ready.go:86] duration metric: took 3.934609ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.796138  641657 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.178090  641657 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:52:17.178123  641657 pod_ready.go:86] duration metric: took 381.961365ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.378373  641657 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.778483  641657 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:52:17.778513  641657 pod_ready.go:86] duration metric: took 400.113683ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.977212  641657 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.378053  641657 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:52:18.378084  641657 pod_ready.go:86] duration metric: took 400.844139ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.378100  641657 pod_ready.go:40] duration metric: took 1.604950114s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:18.430990  641657 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:18.432726  641657 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	W1019 12:52:18.447296  641657 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 3dba5214-9c83-4eaa-8310-4210b4c1a3c4
	I1019 12:52:17.681389  651601 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:17.681417  651601 node_ready.go:38] duration metric: took 11.503278969s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:17.681450  651601 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:17.681503  651601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:17.693486  651601 api_server.go:72] duration metric: took 11.937941722s to wait for apiserver process to appear ...
	I1019 12:52:17.693515  651601 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:17.693535  651601 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:17.697731  651601 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:17.698697  651601 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:17.698719  651601 api_server.go:131] duration metric: took 5.196854ms to wait for apiserver health ...
	I1019 12:52:17.698726  651601 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:17.701803  651601 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:17.701832  651601 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:17.701837  651601 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:17.701843  651601 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:17.701846  651601 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:17.701850  651601 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:17.701857  651601 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:17.701860  651601 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:17.701875  651601 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:17.701884  651601 system_pods.go:74] duration metric: took 3.152261ms to wait for pod list to return data ...
	I1019 12:52:17.701891  651601 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:17.704119  651601 default_sa.go:45] found service account: "default"
	I1019 12:52:17.704135  651601 default_sa.go:55] duration metric: took 2.239807ms for default service account to be created ...
	I1019 12:52:17.704143  651601 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:17.706834  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:17.706868  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:17.706875  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:17.706882  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:17.706886  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:17.706889  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:17.706892  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:17.706895  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:17.706899  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:17.706920  651601 retry.go:31] will retry after 307.814167ms: missing components: kube-dns
	I1019 12:52:18.019475  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:18.019507  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:18.019513  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:18.019519  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:18.019522  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:18.019527  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:18.019532  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:18.019545  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:18.019556  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:18.019575  651601 retry.go:31] will retry after 347.626292ms: missing components: kube-dns
	I1019 12:52:18.371957  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:18.371992  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running
	I1019 12:52:18.372000  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:18.372011  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:18.372017  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:18.372022  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:18.372027  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:18.372032  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:18.372037  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:18.372049  651601 system_pods.go:126] duration metric: took 667.899222ms to wait for k8s-apps to be running ...
	I1019 12:52:18.372064  651601 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:18.372120  651601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:18.387272  651601 system_svc.go:56] duration metric: took 15.199578ms WaitForService to wait for kubelet
	I1019 12:52:18.387298  651601 kubeadm.go:586] duration metric: took 12.63176127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:18.387320  651601 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:18.390760  651601 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:18.390792  651601 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:18.390810  651601 node_conditions.go:105] duration metric: took 3.483692ms to run NodePressure ...
	I1019 12:52:18.390827  651601 start.go:241] waiting for startup goroutines ...
	I1019 12:52:18.390837  651601 start.go:246] waiting for cluster config update ...
	I1019 12:52:18.390851  651601 start.go:255] writing updated cluster config ...
	I1019 12:52:18.391134  651601 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:18.395142  651601 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:18.399443  651601 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.403935  651601 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:52:18.403962  651601 pod_ready.go:86] duration metric: took 4.493999ms for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.405940  651601 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.410036  651601 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.410058  651601 pod_ready.go:86] duration metric: took 4.097261ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.412299  651601 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.416083  651601 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.416102  651601 pod_ready.go:86] duration metric: took 3.780007ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.418113  651601 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.800332  651601 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.800368  651601 pod_ready.go:86] duration metric: took 382.232068ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.001010  651601 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.399840  651601 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:52:19.399867  651601 pod_ready.go:86] duration metric: took 398.825641ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.600330  651601 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.999629  651601 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:19.999672  651601 pod_ready.go:86] duration metric: took 399.317944ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.999688  651601 pod_ready.go:40] duration metric: took 1.604518436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:20.061915  651601 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:20.064494  651601 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
	I1019 12:52:16.246140  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:16.251353  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:52:16.252365  657553 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:16.252392  657553 api_server.go:131] duration metric: took 1.007242213s to wait for apiserver health ...
	I1019 12:52:16.252404  657553 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:16.255472  657553 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:16.255505  657553 system_pods.go:61] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.255515  657553 system_pods.go:61] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:16.255536  657553 system_pods.go:61] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:16.255549  657553 system_pods.go:61] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:16.255559  657553 system_pods.go:61] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:16.255567  657553 system_pods.go:61] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:16.255580  657553 system_pods.go:61] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:16.255588  657553 system_pods.go:61] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.255600  657553 system_pods.go:74] duration metric: took 3.184234ms to wait for pod list to return data ...
	I1019 12:52:16.255612  657553 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:16.257684  657553 default_sa.go:45] found service account: "default"
	I1019 12:52:16.257703  657553 default_sa.go:55] duration metric: took 2.081404ms for default service account to be created ...
	I1019 12:52:16.257712  657553 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:16.260072  657553 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.260095  657553 system_pods.go:89] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.260103  657553 system_pods.go:89] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:16.260110  657553 system_pods.go:89] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:16.260116  657553 system_pods.go:89] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:16.260121  657553 system_pods.go:89] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:16.260142  657553 system_pods.go:89] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:16.260159  657553 system_pods.go:89] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:16.260167  657553 system_pods.go:89] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.260179  657553 system_pods.go:126] duration metric: took 2.461251ms to wait for k8s-apps to be running ...
	I1019 12:52:16.260192  657553 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:16.260244  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:16.273038  657553 system_svc.go:56] duration metric: took 12.840667ms WaitForService to wait for kubelet
	I1019 12:52:16.273061  657553 kubeadm.go:586] duration metric: took 2.945407167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:16.273089  657553 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:16.275467  657553 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:16.275490  657553 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:16.275504  657553 node_conditions.go:105] duration metric: took 2.40634ms to run NodePressure ...
	I1019 12:52:16.275519  657553 start.go:241] waiting for startup goroutines ...
	I1019 12:52:16.275529  657553 start.go:246] waiting for cluster config update ...
	I1019 12:52:16.275539  657553 start.go:255] writing updated cluster config ...
	I1019 12:52:16.275817  657553 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:16.279651  657553 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:16.282937  657553 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:18.288317  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:20.289843  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:18.530110  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:21.029832  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:22.290218  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:24.819087  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
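	
	The pod_ready.go lines above boil down to a simple poll: fetch the pod, check for condition Ready=True, treat NotFound as "gone", and retry until the 4m0s budget expires. A minimal client-go sketch of that loop (illustrative only, with a pod name taken from the log; this is not minikube's actual implementation):
	
	// readiness_sketch.go -- a hedged sketch of the kind of wait pod_ready.go logs above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod has condition Ready=True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-66bc5c9577-pgxlp", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				fmt.Println("pod is gone") // the "Ready or be gone" branch in the log
				return
			}
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod")
	}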
	
	
	==> CRI-O <==
	Oct 19 12:52:16 embed-certs-123864 crio[774]: time="2025-10-19T12:52:16.345627681Z" level=info msg="Starting container: 4cf5f6ae2670a8c25a96332744ab6eeb0281dcb93e3b7c22de6df477ad0934bd" id=5801e778-4e17-404a-8113-05ada3c0fe4a name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:16 embed-certs-123864 crio[774]: time="2025-10-19T12:52:16.347392611Z" level=info msg="Started container" PID=1859 containerID=4cf5f6ae2670a8c25a96332744ab6eeb0281dcb93e3b7c22de6df477ad0934bd description=kube-system/coredns-66bc5c9577-bw9l4/coredns id=5801e778-4e17-404a-8113-05ada3c0fe4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9a68304266321d06a92a569900843df5377c0d4dc8a972b2a1ce923a6a1d31b
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.892265673Z" level=info msg="Running pod sandbox: default/busybox/POD" id=46d9eaec-f5e0-4612-af61-fc4943cf4428 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.892380346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.898043112Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f23a332f26947aed3349a85a34945965dbd3d8106ae4415bb28be7613b84fcc5 UID:113fedc6-dd5a-4b53-873c-ed685ea5ed9c NetNS:/var/run/netns/fc62ef1a-c259-4eb2-a513-bcfed49b513a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b3e8}] Aliases:map[]}"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.898071826Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.908807085Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f23a332f26947aed3349a85a34945965dbd3d8106ae4415bb28be7613b84fcc5 UID:113fedc6-dd5a-4b53-873c-ed685ea5ed9c NetNS:/var/run/netns/fc62ef1a-c259-4eb2-a513-bcfed49b513a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b3e8}] Aliases:map[]}"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.908945701Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.909689139Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.910489027Z" level=info msg="Ran pod sandbox f23a332f26947aed3349a85a34945965dbd3d8106ae4415bb28be7613b84fcc5 with infra container: default/busybox/POD" id=46d9eaec-f5e0-4612-af61-fc4943cf4428 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.911833063Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99cd491d-1625-43c4-b384-bbd5b9be7ff1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.911980838Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=99cd491d-1625-43c4-b384-bbd5b9be7ff1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.912037142Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=99cd491d-1625-43c4-b384-bbd5b9be7ff1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.912840189Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4c1760a5-a4cd-4537-bdc8-a1988410d754 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:52:18 embed-certs-123864 crio[774]: time="2025-10-19T12:52:18.917379697Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.631657999Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=4c1760a5-a4cd-4537-bdc8-a1988410d754 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.63248958Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33c63734-31d1-4f31-a390-fc2cd14197da name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.63407449Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7082c4d5-b84a-491d-8e20-f6707af982ec name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.63714918Z" level=info msg="Creating container: default/busybox/busybox" id=8ced0be8-a726-41d8-bd09-58a77c7fbbd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.63781637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.641135294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.641550876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.666351247Z" level=info msg="Created container f393d90d899b420acc5f8be6a0f934250d4499fa7ed7e70a2cc9ddc43119572b: default/busybox/busybox" id=8ced0be8-a726-41d8-bd09-58a77c7fbbd4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.66701978Z" level=info msg="Starting container: f393d90d899b420acc5f8be6a0f934250d4499fa7ed7e70a2cc9ddc43119572b" id=73f44791-0e24-4947-852c-a5efb4fd64a3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:19 embed-certs-123864 crio[774]: time="2025-10-19T12:52:19.669049791Z" level=info msg="Started container" PID=1937 containerID=f393d90d899b420acc5f8be6a0f934250d4499fa7ed7e70a2cc9ddc43119572b description=default/busybox/busybox id=73f44791-0e24-4947-852c-a5efb4fd64a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f23a332f26947aed3349a85a34945965dbd3d8106ae4415bb28be7613b84fcc5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f393d90d899b4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago        Running             busybox                   0                   f23a332f26947       busybox                                      default
	4cf5f6ae2670a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago       Running             coredns                   0                   b9a6830426632       coredns-66bc5c9577-bw9l4                     kube-system
	bb10614317719       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago       Running             storage-provisioner       0                   f3a539cd9c707       storage-provisioner                          kube-system
	bc1ce2f2ca354       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      51 seconds ago       Running             kube-proxy                0                   4b0fb97aa48c9       kube-proxy-gvrcz                             kube-system
	fe1877fa9caaa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      51 seconds ago       Running             kindnet-cni               0                   b2f088a0734e6       kindnet-zkvs7                                kube-system
	ec0992273caf1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   67f4c1b82773c       kube-controller-manager-embed-certs-123864   kube-system
	679c9c4e76b3a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   407ec7899dd14       kube-apiserver-embed-certs-123864            kube-system
	ebcc0d11a57cf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   5a9cc4467a838       kube-scheduler-embed-certs-123864            kube-system
	a5c01483a5fe0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   ce03243e7dddb       etcd-embed-certs-123864                      kube-system
	
	
	==> coredns [4cf5f6ae2670a8c25a96332744ab6eeb0281dcb93e3b7c22de6df477ad0934bd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37466 - 51057 "HINFO IN 6168180580566772034.6811128731363353675. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.496389876s
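	
	Once CoreDNS is answering on :53 as logged above, cluster DNS can be probed directly against the kube-dns ClusterIP (10.96.0.10, allocated in the kube-apiserver log further down). A minimal sketch, to be run from inside the cluster network (e.g. via minikube ssh); illustrative only:
	
	// dns_probe_sketch.go -- a hedged sketch of a readiness probe against CoreDNS.
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Route every lookup to the kube-dns ClusterIP instead of /etc/resolv.conf.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.TODO(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs) // expect the 10.96.0.1 apiserver ClusterIP
	}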
	
	
	==> describe nodes <==
	Name:               embed-certs-123864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-123864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-123864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-123864
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:15 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:15 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:15 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:15 +0000   Sun, 19 Oct 2025 12:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-123864
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                487d540e-33e7-428f-8d26-3b1ead032aff
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-bw9l4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     52s
	  kube-system                 etcd-embed-certs-123864                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         60s
	  kube-system                 kindnet-zkvs7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-embed-certs-123864             250m (3%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-embed-certs-123864    200m (2%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-gvrcz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-embed-certs-123864             100m (1%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 51s   kube-proxy       
	  Normal  Starting                 58s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s   kubelet          Node embed-certs-123864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s   kubelet          Node embed-certs-123864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s   kubelet          Node embed-certs-123864 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s   node-controller  Node embed-certs-123864 event: Registered Node embed-certs-123864 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-123864 status is now: NodeReady
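	
	For reference, the percentages in the Allocated resources table above are each request (or limit) sum divided by the node's allocatable figure, truncated to a whole percent: cpu requests 850m / 8000m ≈ 10.6% → 10%, cpu limits 100m / 8000m = 1.25% → 1%, and memory 220Mi = 225280Ki, so 225280Ki / 32863448Ki ≈ 0.69% → 0%.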
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [a5c01483a5fe06d00c8281165339d47f736c7c94f2ab2fdbe492f5c07b36f3ad] <==
	{"level":"warn","ts":"2025-10-19T12:51:26.122383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.132348Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.139701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.146289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.152502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.159770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.166161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.174291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.181754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.189273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.200850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.207253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.215210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.222579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.229960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.237264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.244594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.252003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.259831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.266976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.273598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.286892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.294515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.302850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:26.375588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:52:27 up  2:34,  0 user,  load average: 6.12, 5.08, 3.12
	Linux embed-certs-123864 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe1877fa9caaa5453a324298b666d324a6bdeeac3a1aac07a667c425387e2a94] <==
	I1019 12:51:35.217555       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:51:35.217864       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 12:51:35.218065       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:51:35.218087       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:51:35.218102       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:51:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:51:35.516501       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:51:35.516572       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:51:35.516590       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:51:35.516779       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 12:52:05.418624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1019 12:52:05.418624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 12:52:05.418629       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 12:52:05.418629       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1019 12:52:07.016828       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:07.016863       1 metrics.go:72] Registering metrics
	I1019 12:52:07.016942       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:15.423601       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:52:15.423652       1 main.go:301] handling current node
	I1019 12:52:25.420505       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:52:25.420552       1 main.go:301] handling current node
	
	
	==> kube-apiserver [679c9c4e76b3ac6561075496154bd8e59ff9c2d5c7ff00b8acc012beda3c5068] <==
	E1019 12:51:26.987374       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1019 12:51:27.002859       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:51:27.022329       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 12:51:27.022381       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:27.030002       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:27.031495       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:51:27.193504       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:51:27.805763       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:51:27.810215       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:51:27.810233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:51:28.350768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:51:28.385478       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:51:28.511661       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:51:28.519159       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1019 12:51:28.520585       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:51:28.524951       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:51:28.853028       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:51:29.491554       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:51:29.502920       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:51:29.512005       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:51:34.611271       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1019 12:51:34.669786       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:51:34.714622       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:34.723986       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1019 12:52:25.687107       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:39552: use of closed network connection
	
	
	==> kube-controller-manager [ec0992273caf123b7142de546faa72940ce502548f48f4b783d32b49c624c62a] <==
	I1019 12:51:33.852599       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:51:33.852631       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:51:33.852650       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:51:33.852656       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:51:33.852814       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 12:51:33.852837       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:51:33.853344       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:51:33.856539       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 12:51:33.853906       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:51:33.854186       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 12:51:33.854202       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:51:33.855675       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:51:33.861501       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:51:33.861582       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:51:33.861677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:51:33.862403       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:51:33.862548       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:51:33.862558       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:51:33.866200       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:51:33.872672       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-123864" podCIDRs=["10.244.0.0/24"]
	I1019 12:51:33.873119       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:51:33.883509       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:51:33.885627       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:51:33.891927       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:18.861034       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bc1ce2f2ca354002a021aef02edd231cab463864cdccb8625709016753cddc1e] <==
	I1019 12:51:35.064241       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:51:35.135232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:51:35.235387       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:51:35.235466       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 12:51:35.235599       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:51:35.255241       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:51:35.255323       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:51:35.261018       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:51:35.261459       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:51:35.261500       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:51:35.263132       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:51:35.263160       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:51:35.263183       1 config.go:200] "Starting service config controller"
	I1019 12:51:35.263189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:51:35.263207       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:51:35.263213       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:51:35.263246       1 config.go:309] "Starting node config controller"
	I1019 12:51:35.263262       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:51:35.263271       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:51:35.363895       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:51:35.363916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:51:35.363965       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ebcc0d11a57cfb8c488f575aad2787b6a6b3650f39e8efff0d77c792ef663ea5] <==
	E1019 12:51:26.865913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:51:26.865962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:51:26.865976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:51:26.866028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:51:26.866036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:51:26.866080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:51:26.866093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:51:27.757147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:51:27.764861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:51:27.765730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:51:27.793691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:51:27.805052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:51:27.816484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:51:27.850289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:51:27.873948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:51:27.896200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:51:27.906373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:51:28.002027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:51:28.052901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:51:28.072591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:51:28.091609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:51:28.115518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:51:28.116069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:51:28.252747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 12:51:30.763031       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:51:30 embed-certs-123864 kubelet[1338]: I1019 12:51:30.417726    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-123864" podStartSLOduration=1.41770393 podStartE2EDuration="1.41770393s" podCreationTimestamp="2025-10-19 12:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:30.417227606 +0000 UTC m=+1.154217837" watchObservedRunningTime="2025-10-19 12:51:30.41770393 +0000 UTC m=+1.154694121"
	Oct 19 12:51:30 embed-certs-123864 kubelet[1338]: I1019 12:51:30.440262    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-123864" podStartSLOduration=1.440241562 podStartE2EDuration="1.440241562s" podCreationTimestamp="2025-10-19 12:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:30.427888664 +0000 UTC m=+1.164878873" watchObservedRunningTime="2025-10-19 12:51:30.440241562 +0000 UTC m=+1.177231772"
	Oct 19 12:51:30 embed-certs-123864 kubelet[1338]: I1019 12:51:30.440487    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-123864" podStartSLOduration=1.440476444 podStartE2EDuration="1.440476444s" podCreationTimestamp="2025-10-19 12:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:30.439792056 +0000 UTC m=+1.176782266" watchObservedRunningTime="2025-10-19 12:51:30.440476444 +0000 UTC m=+1.177466654"
	Oct 19 12:51:30 embed-certs-123864 kubelet[1338]: I1019 12:51:30.459660    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-123864" podStartSLOduration=3.459635782 podStartE2EDuration="3.459635782s" podCreationTimestamp="2025-10-19 12:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:30.450178842 +0000 UTC m=+1.187169054" watchObservedRunningTime="2025-10-19 12:51:30.459635782 +0000 UTC m=+1.196625996"
	Oct 19 12:51:33 embed-certs-123864 kubelet[1338]: I1019 12:51:33.901722    1338 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 12:51:33 embed-certs-123864 kubelet[1338]: I1019 12:51:33.902516    1338 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.684473    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgb6\" (UniqueName: \"kubernetes.io/projected/39c8c6a5-3b67-4e28-895b-65d5e43fbc5c-kube-api-access-2zgb6\") pod \"kindnet-zkvs7\" (UID: \"39c8c6a5-3b67-4e28-895b-65d5e43fbc5c\") " pod="kube-system/kindnet-zkvs7"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.684777    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxwcm\" (UniqueName: \"kubernetes.io/projected/3b96feeb-3261-4834-945d-8e8048490377-kube-api-access-kxwcm\") pod \"kube-proxy-gvrcz\" (UID: \"3b96feeb-3261-4834-945d-8e8048490377\") " pod="kube-system/kube-proxy-gvrcz"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.684945    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/39c8c6a5-3b67-4e28-895b-65d5e43fbc5c-xtables-lock\") pod \"kindnet-zkvs7\" (UID: \"39c8c6a5-3b67-4e28-895b-65d5e43fbc5c\") " pod="kube-system/kindnet-zkvs7"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.684974    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/39c8c6a5-3b67-4e28-895b-65d5e43fbc5c-lib-modules\") pod \"kindnet-zkvs7\" (UID: \"39c8c6a5-3b67-4e28-895b-65d5e43fbc5c\") " pod="kube-system/kindnet-zkvs7"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.685113    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b96feeb-3261-4834-945d-8e8048490377-kube-proxy\") pod \"kube-proxy-gvrcz\" (UID: \"3b96feeb-3261-4834-945d-8e8048490377\") " pod="kube-system/kube-proxy-gvrcz"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.685139    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b96feeb-3261-4834-945d-8e8048490377-xtables-lock\") pod \"kube-proxy-gvrcz\" (UID: \"3b96feeb-3261-4834-945d-8e8048490377\") " pod="kube-system/kube-proxy-gvrcz"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.685608    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b96feeb-3261-4834-945d-8e8048490377-lib-modules\") pod \"kube-proxy-gvrcz\" (UID: \"3b96feeb-3261-4834-945d-8e8048490377\") " pod="kube-system/kube-proxy-gvrcz"
	Oct 19 12:51:34 embed-certs-123864 kubelet[1338]: I1019 12:51:34.685654    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/39c8c6a5-3b67-4e28-895b-65d5e43fbc5c-cni-cfg\") pod \"kindnet-zkvs7\" (UID: \"39c8c6a5-3b67-4e28-895b-65d5e43fbc5c\") " pod="kube-system/kindnet-zkvs7"
	Oct 19 12:51:35 embed-certs-123864 kubelet[1338]: I1019 12:51:35.419461    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zkvs7" podStartSLOduration=1.419401736 podStartE2EDuration="1.419401736s" podCreationTimestamp="2025-10-19 12:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:35.418835535 +0000 UTC m=+6.155825770" watchObservedRunningTime="2025-10-19 12:51:35.419401736 +0000 UTC m=+6.156391947"
	Oct 19 12:51:36 embed-certs-123864 kubelet[1338]: I1019 12:51:36.109525    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvrcz" podStartSLOduration=2.10950013 podStartE2EDuration="2.10950013s" podCreationTimestamp="2025-10-19 12:51:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:51:35.430598547 +0000 UTC m=+6.167588759" watchObservedRunningTime="2025-10-19 12:51:36.10950013 +0000 UTC m=+6.846490334"
	Oct 19 12:52:15 embed-certs-123864 kubelet[1338]: I1019 12:52:15.955855    1338 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.096686    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/155bf170-e0c9-4cbb-a5a8-3210902a76d0-config-volume\") pod \"coredns-66bc5c9577-bw9l4\" (UID: \"155bf170-e0c9-4cbb-a5a8-3210902a76d0\") " pod="kube-system/coredns-66bc5c9577-bw9l4"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.096732    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4r9z\" (UniqueName: \"kubernetes.io/projected/155bf170-e0c9-4cbb-a5a8-3210902a76d0-kube-api-access-b4r9z\") pod \"coredns-66bc5c9577-bw9l4\" (UID: \"155bf170-e0c9-4cbb-a5a8-3210902a76d0\") " pod="kube-system/coredns-66bc5c9577-bw9l4"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.096771    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjtb\" (UniqueName: \"kubernetes.io/projected/55836f6b-0761-4d80-9bb6-6b937954a401-kube-api-access-qkjtb\") pod \"storage-provisioner\" (UID: \"55836f6b-0761-4d80-9bb6-6b937954a401\") " pod="kube-system/storage-provisioner"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.096870    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/55836f6b-0761-4d80-9bb6-6b937954a401-tmp\") pod \"storage-provisioner\" (UID: \"55836f6b-0761-4d80-9bb6-6b937954a401\") " pod="kube-system/storage-provisioner"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.521980    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bw9l4" podStartSLOduration=41.521954423 podStartE2EDuration="41.521954423s" podCreationTimestamp="2025-10-19 12:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:16.512463404 +0000 UTC m=+47.249453614" watchObservedRunningTime="2025-10-19 12:52:16.521954423 +0000 UTC m=+47.258944634"
	Oct 19 12:52:16 embed-certs-123864 kubelet[1338]: I1019 12:52:16.534853    1338 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.5348281 podStartE2EDuration="41.5348281s" podCreationTimestamp="2025-10-19 12:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:16.52238211 +0000 UTC m=+47.259372316" watchObservedRunningTime="2025-10-19 12:52:16.5348281 +0000 UTC m=+47.271818311"
	Oct 19 12:52:18 embed-certs-123864 kubelet[1338]: I1019 12:52:18.710156    1338 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq4tb\" (UniqueName: \"kubernetes.io/projected/113fedc6-dd5a-4b53-873c-ed685ea5ed9c-kube-api-access-hq4tb\") pod \"busybox\" (UID: \"113fedc6-dd5a-4b53-873c-ed685ea5ed9c\") " pod="default/busybox"
	Oct 19 12:52:25 embed-certs-123864 kubelet[1338]: E1019 12:52:25.687067    1338 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51210->127.0.0.1:38961: write tcp 127.0.0.1:51210->127.0.0.1:38961: write: broken pipe
	
	
	==> storage-provisioner [bb1061431771955548a3e816610c52e5b83d923b7b2ed3a02d5890f2a635c519] <==
	I1019 12:52:16.352139       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:16.360640       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:16.360715       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:52:16.363116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:16.368935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:52:16.369107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:52:16.369258       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_1923e627-3a39-43be-b24b-8ee45e62074c!
	I1019 12:52:16.369250       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45d62354-4f4f-445a-9d0d-795d15878b3f", APIVersion:"v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-123864_1923e627-3a39-43be-b24b-8ee45e62074c became leader
	W1019 12:52:16.374115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:16.377127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:52:16.470074       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_1923e627-3a39-43be-b24b-8ee45e62074c!
	W1019 12:52:18.380132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:18.385695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:20.389002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:20.393746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:22.397147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:22.403063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:24.406698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:24.413635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:26.416817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:26.421279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
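The burst of "Failed to watch ... forbidden" errors in the kube-scheduler log above is a startup race: the scheduler starts listing resources before its RBAC bindings have propagated, and the errors stop once the informer caches sync at 12:51:30. A hand check that the permissions did settle can be sketched with kubectl impersonation (the context name comes from this test; nothing below was run as part of this report):

    # Verify the scheduler's RBAC after startup (illustrative, not from the test run).
    kubectl --context embed-certs-123864 auth can-i list pods --as=system:kube-scheduler
    # Prints "yes" once the cluster role bindings have propagated.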
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-123864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.06s)
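A side note on the kube-proxy warning captured in the post-mortem above (server.go:256): nodePortAddresses is unset, so NodePort connections are accepted on every local IP. The remedy is the flag the message itself suggests; shown standalone below purely for illustration (these logs do not show how minikube would plumb it through to kube-proxy):

    # The flag suggested by kube-proxy's own warning (sketch only).
    kube-proxy --nodeport-addresses primary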

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (233.180352ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
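The root cause sits in the stderr above: per the error chain ("check paused: list paused: runc"), the addon-enable path first checks whether the cluster is paused by shelling out to runc, and `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node. A reproduction sketch (profile name taken from this test; `minikube ssh` and `crictl` are standard tooling, and none of this was run as part of the report):

    # Re-run the failing paused-state check by hand (illustrative).
    minikube -p default-k8s-diff-port-999693 ssh -- sudo runc list -f json
    # Expected to mirror the error above: "open /run/runc: no such file or directory".
    # Cross-check what the CRI runtime itself reports:
    minikube -p default-k8s-diff-port-999693 ssh -- sudo crictl ps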
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-999693 describe deploy/metrics-server -n kube-system: exit status 1 (58.994484ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-999693 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-999693
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-999693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	        "Created": "2025-10-19T12:51:45.922696096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 652627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:51:45.959901171Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hosts",
	        "LogPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0-json.log",
	        "Name": "/default-k8s-diff-port-999693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-999693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-999693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	                "LowerDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-999693",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-999693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-999693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "610e15c36bb1a60dfeb43652745a03819ef684ab83fc32eafeb25176a87287f6",
	            "SandboxKey": "/var/run/docker/netns/610e15c36bb1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-999693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:05:94:0f:65:71",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de90530a289272ed110d9eb21157ec5037120fb6575a550c928b9dda03629c85",
	                    "EndpointID": "f26b788b66f995403df19301dc7937c5ad51c9d2fac3d9392ca1c58f925da09b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-999693",
	                        "1ece3120c0d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
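As an aside, the host-port mappings buried in the inspect dump above (SSH on 33475, the 8444 API port on 33478, and so on) can be pulled out directly with a Go-template filter instead of reading the full JSON; a small sketch, assuming `jq` is available:

    # Extract only the published ports from the container inspected above (illustrative).
    docker inspect default-k8s-diff-port-999693 --format '{{json .NetworkSettings.Ports}}' | jq .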
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25: (1.014329858s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo containerd config dump                                                                                                                                                                                                  │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:06.021536  657553 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:06.021680  657553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:06.021694  657553 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:06.021700  657553 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:06.022131  657553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:06.022798  657553 out.go:368] Setting JSON to false
	I1019 12:52:06.025052  657553 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9274,"bootTime":1760869052,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:06.025253  657553 start.go:141] virtualization: kvm guest
	I1019 12:52:06.027267  657553 out.go:179] * [no-preload-561408] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:06.030479  657553 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:06.030471  657553 notify.go:220] Checking for updates...
	I1019 12:52:06.032737  657553 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:06.033834  657553 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:06.034905  657553 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:06.035945  657553 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:06.037060  657553 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:06.039056  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:06.039841  657553 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:06.079502  657553 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:06.079756  657553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:06.176630  657553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 12:52:06.161981405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:06.176787  657553 docker.go:318] overlay module found
	I1019 12:52:06.178799  657553 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:06.180448  657553 start.go:305] selected driver: docker
	I1019 12:52:06.180466  657553 start.go:925] validating driver "docker" against &{Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:06.180576  657553 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:06.181482  657553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:06.302353  657553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 12:52:06.286479089 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:06.303099  657553 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:06.303183  657553 cni.go:84] Creating CNI manager for ""
	I1019 12:52:06.303319  657553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:06.303502  657553 start.go:349] cluster config:
	{Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:06.316881  657553 out.go:179] * Starting "no-preload-561408" primary control-plane node in "no-preload-561408" cluster
	I1019 12:52:06.318615  657553 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:06.322526  657553 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:06.327640  657553 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:06.327738  657553 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:06.328069  657553 cache.go:107] acquiring lock: {Name:mk5550171751fb66fbb8bbbf1840689496877f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328174  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1019 12:52:06.328185  657553 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 134.911µs
	I1019 12:52:06.328199  657553 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/config.json ...
	I1019 12:52:06.328214  657553 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1019 12:52:06.328234  657553 cache.go:107] acquiring lock: {Name:mk536b3e79f3c82320f5fd1d75cba698777893be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328282  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1019 12:52:06.328289  657553 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 60.101µs
	I1019 12:52:06.328297  657553 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1019 12:52:06.328309  657553 cache.go:107] acquiring lock: {Name:mke024304bcffc4ea281303157bf5c91e9430bca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328345  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1019 12:52:06.328352  657553 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 44.643µs
	I1019 12:52:06.328360  657553 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1019 12:52:06.328372  657553 cache.go:107] acquiring lock: {Name:mkd16e2a6ab077ae9b611f70a18ddfb328ed7273 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328405  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1019 12:52:06.328411  657553 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 40.864µs
	I1019 12:52:06.328418  657553 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1019 12:52:06.328451  657553 cache.go:107] acquiring lock: {Name:mk44b800128ce65419f1f04875d5a608ed0e5a0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328495  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1019 12:52:06.328503  657553 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 54.974µs
	I1019 12:52:06.328469  657553 cache.go:107] acquiring lock: {Name:mk553f2fd2502ef0a79fb07ecb498f641a6bf044 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328524  657553 cache.go:107] acquiring lock: {Name:mk45e746ba750b6a63de6802a26f6a78ae57ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328562  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1019 12:52:06.328559  657553 cache.go:107] acquiring lock: {Name:mk9ddd4589a738691f68fcba3df7072d33f92e6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.328592  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1019 12:52:06.328600  657553 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 77.93µs
	I1019 12:52:06.328605  657553 cache.go:115] /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1019 12:52:06.328608  657553 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1019 12:52:06.328513  657553 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1019 12:52:06.328571  657553 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 112.184µs
	I1019 12:52:06.328614  657553 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 57.288µs
	I1019 12:52:06.328618  657553 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1019 12:52:06.328622  657553 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1019 12:52:06.328631  657553 cache.go:87] Successfully saved all images to host disk.
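A note on the cache.go lines above: for every image minikube takes a per-image lock, checks whether the tar already exists under .minikube/cache/images/amd64, and records the save as succeeded without touching the network, which is why each entry completes in microseconds. A quick way to inspect that cache by hand (a sketch; the cache root is the path reported in this log, so substitute your own .minikube directory):

    # list the image tars minikube will reuse instead of re-downloading
    find /home/jenkins/minikube-integration/21772-351705/.minikube/cache/images/amd64 -type f | sort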
	I1019 12:52:06.359097  657553 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:06.359196  657553 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:06.359249  657553 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:52:06.359286  657553 start.go:360] acquireMachinesLock for no-preload-561408: {Name:mk03a123c2e4ac5bfd3445ed8fbfda61388ba21c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:06.359354  657553 start.go:364] duration metric: took 50.702µs to acquireMachinesLock for "no-preload-561408"
	I1019 12:52:06.359374  657553 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:06.359380  657553 fix.go:54] fixHost starting: 
	I1019 12:52:06.359721  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:06.386839  657553 fix.go:112] recreateIfNeeded on no-preload-561408: state=Stopped err=<nil>
	W1019 12:52:06.386966  657553 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:05.797514  651601 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:05.797581  651601 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:05.797681  651601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:05.830112  651601 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:05.830214  651601 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:05.830344  651601 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:05.830681  651601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:05.854590  651601 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:05.873824  651601 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:52:05.947416  651601 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:05.966348  651601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:05.974187  651601 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:06.176456  651601 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1019 12:52:06.178096  651601 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:06.491995  651601 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
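The CoreDNS rewrite a few lines up (12:52:05.873) pipes the live coredns ConfigMap through sed to splice a hosts{} stanza mapping host.minikube.internal to 192.168.85.1 ahead of the forward directive, then feeds the result back with kubectl replace. To confirm the record landed (a sketch with stock kubectl, assuming the cluster's context is active):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'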
	I1019 12:52:03.856637  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.857815  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:03.857839  655442 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:03.857894  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.889135  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:03.892092  655442 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:03.892113  655442 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:03.892174  655442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:52:03.896084  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:03.924853  655442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:52:04.009788  655442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:04.011166  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:04.018393  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:04.018415  655442 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:04.028714  655442 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:52:04.035509  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:04.038340  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:04.038360  655442 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:04.054689  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:04.054717  655442 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:04.072845  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:04.072930  655442 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:04.091760  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:04.091798  655442 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:04.107831  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:04.107854  655442 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:04.124181  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:04.124212  655442 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:04.137076  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:04.137103  655442 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:04.150088  655442 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:04.150151  655442 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:04.163719  655442 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:06.275572  655442 node_ready.go:49] node "old-k8s-version-577062" is "Ready"
	I1019 12:52:06.275617  655442 node_ready.go:38] duration metric: took 2.246864402s for node "old-k8s-version-577062" to be "Ready" ...
	I1019 12:52:06.275636  655442 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:06.275697  655442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1019 12:52:02.118320  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:04.119176  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:06.121056  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:07.100520  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.08931381s)
	I1019 12:52:07.100640  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.065093835s)
	I1019 12:52:07.463378  655442 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.299599944s)
	I1019 12:52:07.463378  655442 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.187647784s)
	I1019 12:52:07.463513  655442 api_server.go:72] duration metric: took 3.636756668s to wait for apiserver process to appear ...
	I1019 12:52:07.463666  655442 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:07.463692  655442 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1019 12:52:07.465543  655442 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-577062 addons enable metrics-server
	
	I1019 12:52:07.467221  655442 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1019 12:52:06.493155  651601 addons.go:514] duration metric: took 737.35716ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:52:06.682068  651601 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-999693" context rescaled to 1 replicas
	W1019 12:52:08.180972  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	W1019 12:52:10.181291  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	I1019 12:52:06.389885  657553 out.go:252] * Restarting existing docker container for "no-preload-561408" ...
	I1019 12:52:06.390067  657553 cli_runner.go:164] Run: docker start no-preload-561408
	I1019 12:52:06.684685  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:06.714763  657553 kic.go:430] container "no-preload-561408" state is running.
	I1019 12:52:06.715254  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:06.739105  657553 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/config.json ...
	I1019 12:52:06.739457  657553 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:06.739597  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:06.760367  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:06.760837  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:06.760854  657553 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:06.761610  657553 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47010->127.0.0.1:33485: read: connection reset by peer
	I1019 12:52:09.897308  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-561408
	
	I1019 12:52:09.897338  657553 ubuntu.go:182] provisioning hostname "no-preload-561408"
	I1019 12:52:09.897406  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:09.915936  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:09.916189  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:09.916204  657553 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-561408 && echo "no-preload-561408" | sudo tee /etc/hostname
	I1019 12:52:10.058776  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-561408
	
	I1019 12:52:10.058861  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.077237  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:10.077531  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:10.077559  657553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-561408' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-561408/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-561408' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:10.213384  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:10.213449  657553 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:10.213484  657553 ubuntu.go:190] setting up certificates
	I1019 12:52:10.213500  657553 provision.go:84] configureAuth start
	I1019 12:52:10.213581  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:10.232099  657553 provision.go:143] copyHostCerts
	I1019 12:52:10.232168  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:10.232188  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:10.232264  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:10.232389  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:10.232402  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:10.232479  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:10.232584  657553 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:10.232596  657553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:10.232646  657553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:10.232742  657553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.no-preload-561408 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-561408]
	I1019 12:52:10.474544  657553 provision.go:177] copyRemoteCerts
	I1019 12:52:10.474615  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:10.474662  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.493763  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:10.591892  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:10.610126  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:10.627794  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:52:10.645956  657553 provision.go:87] duration metric: took 432.438635ms to configureAuth
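configureAuth above does three things: copyHostCerts refreshes ca.pem/cert.pem/key.pem under .minikube, a server certificate is minted with the SANs listed in the log (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-561408), and copyRemoteCerts ships it to /etc/docker on the node. To eyeball those SANs after the fact (a sketch with stock openssl, run on the node):

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A 1 'Subject Alternative Name'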
	I1019 12:52:10.645981  657553 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:10.646136  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:10.646253  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.664566  657553 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:10.664836  657553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1019 12:52:10.664862  657553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:10.954783  657553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:10.954815  657553 machine.go:96] duration metric: took 4.215306426s to provisionDockerMachine
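The last provisioning command above writes /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the daemon; the echoed CRIO_MINIKUBE_OPTIONS line in the command output confirms the write. Checking it later is trivial (a sketch, run on the node, e.g. via minikube ssh):

    cat /etc/sysconfig/crio.minikube && systemctl is-active crio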
	I1019 12:52:10.954831  657553 start.go:293] postStartSetup for "no-preload-561408" (driver="docker")
	I1019 12:52:10.954845  657553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:10.954911  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:10.954960  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:10.973873  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:07.468334  655442 addons.go:514] duration metric: took 3.64150505s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1019 12:52:07.469589  655442 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1019 12:52:07.471181  655442 api_server.go:141] control plane version: v1.28.0
	I1019 12:52:07.471205  655442 api_server.go:131] duration metric: took 7.529968ms to wait for apiserver health ...
	I1019 12:52:07.471215  655442 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:07.476389  655442 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:07.476457  655442 system_pods.go:61] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:07.476471  655442 system_pods.go:61] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:07.476481  655442 system_pods.go:61] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:07.476494  655442 system_pods.go:61] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:07.476507  655442 system_pods.go:61] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:07.476517  655442 system_pods.go:61] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:07.476574  655442 system_pods.go:61] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:07.476583  655442 system_pods.go:61] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:07.476594  655442 system_pods.go:74] duration metric: took 5.37114ms to wait for pod list to return data ...
	I1019 12:52:07.476608  655442 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:07.480455  655442 default_sa.go:45] found service account: "default"
	I1019 12:52:07.480476  655442 default_sa.go:55] duration metric: took 3.861262ms for default service account to be created ...
	I1019 12:52:07.480487  655442 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:07.486365  655442 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:07.486398  655442 system_pods.go:89] "coredns-5dd5756b68-44mqv" [360fd17f-a1ea-4400-85fa-dd78ab44fcbc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:07.486411  655442 system_pods.go:89] "etcd-old-k8s-version-577062" [1561017e-3d8c-4abb-b580-ea4eac44212a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:07.486451  655442 system_pods.go:89] "kindnet-2h26b" [357fe2d6-42b8-4f53-aa84-9fde0f804ee8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:07.486463  655442 system_pods.go:89] "kube-apiserver-old-k8s-version-577062" [836bda6f-5d8c-4bbc-833c-c563da74cbbb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:07.486472  655442 system_pods.go:89] "kube-controller-manager-old-k8s-version-577062" [444afdc9-ca27-4986-9684-d3b8c191a406] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:07.486482  655442 system_pods.go:89] "kube-proxy-lhths" [3dba9194-393b-4f18-a6e5-057bd803c642] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:07.486489  655442 system_pods.go:89] "kube-scheduler-old-k8s-version-577062" [12c61412-0e63-4451-8b6d-70992b408f0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:07.486497  655442 system_pods.go:89] "storage-provisioner" [f97edd8d-a3ad-4339-a4c6-99bc764b5534] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:07.486508  655442 system_pods.go:126] duration metric: took 6.013889ms to wait for k8s-apps to be running ...
	I1019 12:52:07.486519  655442 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:07.486570  655442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:07.506943  655442 system_svc.go:56] duration metric: took 20.409832ms WaitForService to wait for kubelet
	I1019 12:52:07.506976  655442 kubeadm.go:586] duration metric: took 3.680220176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:07.506999  655442 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:07.510612  655442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:07.510645  655442 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:07.510663  655442 node_conditions.go:105] duration metric: took 3.657575ms to run NodePressure ...
	I1019 12:52:07.510680  655442 start.go:241] waiting for startup goroutines ...
	I1019 12:52:07.510690  655442 start.go:246] waiting for cluster config update ...
	I1019 12:52:07.510708  655442 start.go:255] writing updated cluster config ...
	I1019 12:52:07.510964  655442 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:07.516010  655442 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:07.522233  655442 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:09.527810  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:11.529100  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:08.618467  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	W1019 12:52:11.119035  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:11.071482  657553 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:11.075114  657553 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:11.075138  657553 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:11.075151  657553 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:11.075201  657553 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:11.075337  657553 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:11.075485  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:11.083624  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:11.101002  657553 start.go:296] duration metric: took 146.156667ms for postStartSetup
	I1019 12:52:11.101071  657553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:11.101106  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.119259  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.212600  657553 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:11.217365  657553 fix.go:56] duration metric: took 4.857978374s for fixHost
	I1019 12:52:11.217392  657553 start.go:83] releasing machines lock for "no-preload-561408", held for 4.858027541s
	I1019 12:52:11.217484  657553 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-561408
	I1019 12:52:11.235583  657553 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:11.235640  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.235690  657553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:11.235743  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:11.254508  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.255069  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:11.402958  657553 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:11.409621  657553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:11.444994  657553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:11.449769  657553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:11.449849  657553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:11.458371  657553 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
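The two CNI steps above first look for a loopback config to skip, then rename any bridge/podman configs in /etc/cni/net.d to *.mk_disabled so that only the CNI minikube installs (kindnet, per the recommendation logged earlier) stays active; here nothing matched. Listing what, if anything, was sidelined (a sketch, on the node):

    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo 'no configs disabled'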
	I1019 12:52:11.458399  657553 start.go:495] detecting cgroup driver to use...
	I1019 12:52:11.458447  657553 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:11.458504  657553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:11.472650  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:11.484869  657553 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:11.484940  657553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:11.499590  657553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:11.512402  657553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:11.596415  657553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:11.679377  657553 docker.go:234] disabling docker service ...
	I1019 12:52:11.679480  657553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:11.694788  657553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:11.707377  657553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:11.787388  657553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:11.878281  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:11.890924  657553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:11.906692  657553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:11.906751  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.915738  657553 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:11.915800  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.924468  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.933011  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.941471  657553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:11.949504  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.958166  657553 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.966742  657553 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:11.975718  657553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:11.982855  657553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:11.989970  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:12.071282  657553 ssh_runner.go:195] Run: sudo systemctl restart crio
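The run of sed commands before this restart edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to systemd, recreates conmon_cgroup as "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. Verifying the net effect (a sketch, same config path as in the log):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf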
	I1019 12:52:12.181511  657553 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:12.181582  657553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:12.185571  657553 start.go:563] Will wait 60s for crictl version
	I1019 12:52:12.185623  657553 ssh_runner.go:195] Run: which crictl
	I1019 12:52:12.189194  657553 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:12.214572  657553 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:12.214640  657553 ssh_runner.go:195] Run: crio --version
	I1019 12:52:12.242554  657553 ssh_runner.go:195] Run: crio --version
	I1019 12:52:12.272501  657553 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:12.273754  657553 cli_runner.go:164] Run: docker network inspect no-preload-561408 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:12.291231  657553 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:12.295321  657553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:12.306201  657553 kubeadm.go:883] updating cluster {Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:12.306305  657553 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:12.306334  657553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:12.338608  657553 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:12.338637  657553 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:12.338646  657553 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:12.338769  657553 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-561408 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
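The unit text above is minikube's systemd drop-in for the kubelet: the first, empty ExecStart= clears any packaged command line and the second re-adds it with node-specific flags (--hostname-override, --node-ip, the bootstrap kubeconfig). It is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. The effective merged unit can be inspected on the node with (a sketch):

    systemctl cat kubelet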
	I1019 12:52:12.338856  657553 ssh_runner.go:195] Run: crio config
	I1019 12:52:12.386474  657553 cni.go:84] Creating CNI manager for ""
	I1019 12:52:12.386501  657553 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:12.386523  657553 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:12.386564  657553 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-561408 NodeName:no-preload-561408 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:12.386734  657553 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-561408"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
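	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered in memory and copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of checking such a file by hand, assuming the staged v1.34.1 kubeadm binary supports `kubeadm config validate` (a hypothetical manual step; minikube itself only scp's the file and later diffs it against the live copy):
	
	# hypothetical manual validation of the rendered config
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	# minikube's own equivalence check, seen later in this log:
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new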
	
	I1019 12:52:12.386817  657553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:12.395314  657553 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:12.395374  657553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:12.403260  657553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:52:12.416734  657553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:12.429722  657553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1019 12:52:12.442398  657553 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:12.446399  657553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
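	The one-liner above updates /etc/hosts idempotently: filter out any stale control-plane.minikube.internal entry, append the current mapping, and copy the temp file back into place. Standalone form of the same idiom, with the IP from this run:
	
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts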
	I1019 12:52:12.457646  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:12.540361  657553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:12.562824  657553 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408 for IP: 192.168.94.2
	I1019 12:52:12.562847  657553 certs.go:195] generating shared ca certs ...
	I1019 12:52:12.562868  657553 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:12.563050  657553 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:12.563118  657553 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:12.563132  657553 certs.go:257] generating profile certs ...
	I1019 12:52:12.563244  657553 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/client.key
	I1019 12:52:12.563300  657553 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.key.efacda45
	I1019 12:52:12.563355  657553 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.key
	I1019 12:52:12.563546  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:12.563591  657553 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:12.563605  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:12.563631  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:12.563660  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:12.563688  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:12.563740  657553 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:12.564751  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:12.586015  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:12.605379  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:12.625545  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:12.650503  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:52:12.668376  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:12.686762  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:12.703747  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/no-preload-561408/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:12.721156  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:12.738772  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:12.757188  657553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:12.775463  657553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:12.788297  657553 ssh_runner.go:195] Run: openssl version
	I1019 12:52:12.794682  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:12.803304  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.807123  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.807184  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:12.843773  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:12.852395  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:12.860740  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.864509  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.864565  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:12.902934  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:12.911575  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:12.920534  657553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.924261  657553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.924318  657553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:12.959387  657553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
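	Each CA is installed in two steps: the PEM is linked into /etc/ssl/certs under its own name, then again under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is the filename OpenSSL uses to locate trust anchors. The hash link can be reproduced by hand:
	
	# compute the subject hash and create the <hash>.0 symlink OpenSSL expects
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"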
	I1019 12:52:12.968395  657553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:12.972181  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:13.007750  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:13.044824  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:13.093865  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:13.145444  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:13.203541  657553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
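	The six -checkend 86400 probes above ask whether each control-plane certificate remains valid for at least another 24 hours (exit 0 if so); a failing check is the signal used to decide whether regeneration is needed. A sketch of the same check as a loop (the loop is illustrative; the cert names are the ones probed above):
	
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	    -in "/var/lib/minikube/certs/${c}.crt" && echo "${c}: valid for 24h+"
	done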
	I1019 12:52:13.245821  657553 kubeadm.go:400] StartCluster: {Name:no-preload-561408 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-561408 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:13.245933  657553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:13.245992  657553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:13.278578  657553 cri.go:89] found id: "6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4"
	I1019 12:52:13.278602  657553 cri.go:89] found id: "f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7"
	I1019 12:52:13.278606  657553 cri.go:89] found id: "9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d"
	I1019 12:52:13.278609  657553 cri.go:89] found id: "01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115"
	I1019 12:52:13.278612  657553 cri.go:89] found id: ""
	I1019 12:52:13.278651  657553 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:13.291369  657553 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:13Z" level=error msg="open /run/runc: no such file or directory"
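	The runc failure above appears harmless in this run: minikube lists paused containers via runc to decide whether anything needs unpausing, and a missing /run/runc state directory simply means nothing is paused, so the warning is logged and startup continues with the config-file check. The probe itself:
	
	sudo runc list -f json   # exits 1 with "open /run/runc: no such file or directory" when no runc state exists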
	I1019 12:52:13.291463  657553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:13.300232  657553 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:13.300255  657553 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:13.300304  657553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:13.309450  657553 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:13.310797  657553 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-561408" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:13.312106  657553 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-561408" cluster setting kubeconfig missing "no-preload-561408" context setting]
	I1019 12:52:13.313508  657553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.315974  657553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:13.324849  657553 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1019 12:52:13.324885  657553 kubeadm.go:601] duration metric: took 24.623091ms to restartPrimaryControlPlane
	I1019 12:52:13.324895  657553 kubeadm.go:402] duration metric: took 79.087378ms to StartCluster
	I1019 12:52:13.324916  657553 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.324984  657553 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:13.327319  657553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:13.327622  657553 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:13.327716  657553 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:13.327817  657553 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:13.327820  657553 addons.go:69] Setting storage-provisioner=true in profile "no-preload-561408"
	I1019 12:52:13.327838  657553 addons.go:238] Setting addon storage-provisioner=true in "no-preload-561408"
	W1019 12:52:13.327850  657553 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:13.327864  657553 addons.go:69] Setting default-storageclass=true in profile "no-preload-561408"
	I1019 12:52:13.327879  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.327879  657553 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-561408"
	I1019 12:52:13.327870  657553 addons.go:69] Setting dashboard=true in profile "no-preload-561408"
	I1019 12:52:13.327986  657553 addons.go:238] Setting addon dashboard=true in "no-preload-561408"
	W1019 12:52:13.327997  657553 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:13.328040  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.328147  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.328299  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.328630  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.329270  657553 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:13.330520  657553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:13.354584  657553 addons.go:238] Setting addon default-storageclass=true in "no-preload-561408"
	W1019 12:52:13.354606  657553 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:13.354636  657553 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:52:13.355105  657553 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:52:13.355108  657553 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:13.355108  657553 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:13.356935  657553 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:13.356955  657553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:13.356975  657553 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:13.357007  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.358223  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:13.358242  657553 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:13.358298  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.391736  657553 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:13.391767  657553 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:13.391838  657553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:52:13.392176  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.392389  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.416463  657553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:52:13.489291  657553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:13.506209  657553 node_ready.go:35] waiting up to 6m0s for node "no-preload-561408" to be "Ready" ...
	I1019 12:52:13.507053  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:13.507078  657553 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:13.510575  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:13.521920  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:13.521943  657553 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:13.534827  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:13.541245  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:13.541269  657553 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:13.558572  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:13.558597  657553 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:13.578361  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:13.578399  657553 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:13.592267  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:13.592294  657553 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:13.607060  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:13.607087  657553 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:13.621489  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:13.621511  657553 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:13.635208  657553 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:13.635232  657553 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:13.647649  657553 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
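	All ten dashboard manifests are applied in a single kubectl invocation against the node-local kubeconfig; kubectl accepts repeated -f flags. Condensed form of the command above (the remaining dashboard-*.yaml files from the log are elided here):
	
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml \
	  -f /etc/kubernetes/addons/dashboard-svc.yaml   # ...plus the eight other dashboard manifests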
	I1019 12:52:14.712976  657553 node_ready.go:49] node "no-preload-561408" is "Ready"
	I1019 12:52:14.713020  657553 node_ready.go:38] duration metric: took 1.20677668s for node "no-preload-561408" to be "Ready" ...
	I1019 12:52:14.713044  657553 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:14.713100  657553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:15.244946  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.734335605s)
	I1019 12:52:15.245021  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.710160862s)
	I1019 12:52:15.245115  657553 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.59742692s)
	I1019 12:52:15.245136  657553 api_server.go:72] duration metric: took 1.917480531s to wait for apiserver process to appear ...
	I1019 12:52:15.245145  657553 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:15.245162  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:15.246598  657553 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-561408 addons enable metrics-server
	
	I1019 12:52:15.249535  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:15.249558  657553 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
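	The I/W pair above is one 500 response logged twice, once at info level and once as a warning: /healthz reports the per-check breakdown while two post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are still pending, and minikube keeps polling until it sees 200/ok. The same probe by hand (a sketch; -k because the API server cert is not in the local trust store):
	
	curl -k "https://192.168.94.2:8443/healthz?verbose"
	# prints the [+]/[-] check list; HTTP 500 until every post-start hook completes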
	I1019 12:52:15.252364  657553 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1019 12:52:12.681796  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	W1019 12:52:15.182126  651601 node_ready.go:57] node "default-k8s-diff-port-999693" has "Ready":"False" status (will retry)
	I1019 12:52:15.253341  657553 addons.go:514] duration metric: took 1.925639227s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:15.745567  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:15.750652  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:15.750680  657553 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:13.530582  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:16.029365  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:13.618473  641657 node_ready.go:57] node "embed-certs-123864" has "Ready":"False" status (will retry)
	I1019 12:52:16.118349  641657 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:16.118385  641657 node_ready.go:38] duration metric: took 41.00326347s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:16.118405  641657 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:16.118476  641657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:16.131836  641657 api_server.go:72] duration metric: took 41.609178423s to wait for apiserver process to appear ...
	I1019 12:52:16.131860  641657 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:16.131881  641657 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:16.137339  641657 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:16.138264  641657 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:16.138287  641657 api_server.go:131] duration metric: took 6.421314ms to wait for apiserver health ...
	I1019 12:52:16.138295  641657 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:16.141339  641657 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:16.141370  641657 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.141376  641657 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.141383  641657 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.141390  641657 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.141398  641657 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.141401  641657 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.141405  641657 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.141410  641657 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.141416  641657 system_pods.go:74] duration metric: took 3.117331ms to wait for pod list to return data ...
	I1019 12:52:16.141466  641657 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:16.143704  641657 default_sa.go:45] found service account: "default"
	I1019 12:52:16.143719  641657 default_sa.go:55] duration metric: took 2.248215ms for default service account to be created ...
	I1019 12:52:16.143726  641657 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:16.146129  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.146153  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.146158  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.146164  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.146167  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.146172  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.146175  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.146179  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.146184  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.146203  641657 retry.go:31] will retry after 285.400832ms: missing components: kube-dns
	I1019 12:52:16.436535  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.436567  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.436572  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.436580  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.436584  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.436588  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.436592  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.436595  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.436599  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.436615  641657 retry.go:31] will retry after 310.044699ms: missing components: kube-dns
	I1019 12:52:16.750571  641657 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.750602  641657 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running
	I1019 12:52:16.750611  641657 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running
	I1019 12:52:16.750616  641657 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running
	I1019 12:52:16.750622  641657 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running
	I1019 12:52:16.750627  641657 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running
	I1019 12:52:16.750631  641657 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running
	I1019 12:52:16.750636  641657 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running
	I1019 12:52:16.750641  641657 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running
	I1019 12:52:16.750650  641657 system_pods.go:126] duration metric: took 606.917887ms to wait for k8s-apps to be running ...
	I1019 12:52:16.750663  641657 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:16.750723  641657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:16.764587  641657 system_svc.go:56] duration metric: took 13.912641ms WaitForService to wait for kubelet
	I1019 12:52:16.764619  641657 kubeadm.go:586] duration metric: took 42.241965825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:16.764646  641657 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:16.767727  641657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:16.767757  641657 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:16.767773  641657 node_conditions.go:105] duration metric: took 3.120512ms to run NodePressure ...
	I1019 12:52:16.767786  641657 start.go:241] waiting for startup goroutines ...
	I1019 12:52:16.767800  641657 start.go:246] waiting for cluster config update ...
	I1019 12:52:16.767814  641657 start.go:255] writing updated cluster config ...
	I1019 12:52:16.768149  641657 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:16.773114  641657 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:16.777330  641657 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.782062  641657 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:52:16.782086  641657 pod_ready.go:86] duration metric: took 4.735811ms for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.784129  641657 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.788298  641657 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:52:16.788321  641657 pod_ready.go:86] duration metric: took 4.171088ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.790285  641657 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.794219  641657 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:52:16.794240  641657 pod_ready.go:86] duration metric: took 3.934609ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:16.796138  641657 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.178090  641657 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:52:17.178123  641657 pod_ready.go:86] duration metric: took 381.961365ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.378373  641657 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.778483  641657 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:52:17.778513  641657 pod_ready.go:86] duration metric: took 400.113683ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:17.977212  641657 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.378053  641657 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:52:18.378084  641657 pod_ready.go:86] duration metric: took 400.844139ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.378100  641657 pod_ready.go:40] duration metric: took 1.604950114s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:18.430990  641657 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:18.432726  641657 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	W1019 12:52:18.447296  641657 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 3dba5214-9c83-4eaa-8310-4210b4c1a3c4
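	The "extra waiting" phase above polls each kube-system pod by label until it is "Ready" or gone. A rough out-of-band equivalent with kubectl (hypothetical invocation, including the context name; this is not what the test harness runs):
	
	kubectl --context embed-certs-123864 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m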
	I1019 12:52:17.681389  651601 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:17.681417  651601 node_ready.go:38] duration metric: took 11.503278969s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:17.681450  651601 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:17.681503  651601 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:17.693486  651601 api_server.go:72] duration metric: took 11.937941722s to wait for apiserver process to appear ...
	I1019 12:52:17.693515  651601 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:17.693535  651601 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:17.697731  651601 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:17.698697  651601 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:17.698719  651601 api_server.go:131] duration metric: took 5.196854ms to wait for apiserver health ...
	I1019 12:52:17.698726  651601 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:17.701803  651601 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:17.701832  651601 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:17.701837  651601 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:17.701843  651601 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:17.701846  651601 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:17.701850  651601 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:17.701857  651601 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:17.701860  651601 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:17.701875  651601 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:17.701884  651601 system_pods.go:74] duration metric: took 3.152261ms to wait for pod list to return data ...
	I1019 12:52:17.701891  651601 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:17.704119  651601 default_sa.go:45] found service account: "default"
	I1019 12:52:17.704135  651601 default_sa.go:55] duration metric: took 2.239807ms for default service account to be created ...
	I1019 12:52:17.704143  651601 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:17.706834  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:17.706868  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:17.706875  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:17.706882  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:17.706886  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:17.706889  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:17.706892  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:17.706895  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:17.706899  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:17.706920  651601 retry.go:31] will retry after 307.814167ms: missing components: kube-dns
	I1019 12:52:18.019475  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:18.019507  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:18.019513  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:18.019519  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:18.019522  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:18.019527  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:18.019532  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:18.019545  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:18.019556  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:18.019575  651601 retry.go:31] will retry after 347.626292ms: missing components: kube-dns
	I1019 12:52:18.371957  651601 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:18.371992  651601 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running
	I1019 12:52:18.372000  651601 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running
	I1019 12:52:18.372011  651601 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:18.372017  651601 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running
	I1019 12:52:18.372022  651601 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running
	I1019 12:52:18.372027  651601 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:18.372032  651601 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running
	I1019 12:52:18.372037  651601 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:18.372049  651601 system_pods.go:126] duration metric: took 667.899222ms to wait for k8s-apps to be running ...
	I1019 12:52:18.372064  651601 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:18.372120  651601 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:18.387272  651601 system_svc.go:56] duration metric: took 15.199578ms WaitForService to wait for kubelet
	I1019 12:52:18.387298  651601 kubeadm.go:586] duration metric: took 12.63176127s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:18.387320  651601 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:18.390760  651601 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:18.390792  651601 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:18.390810  651601 node_conditions.go:105] duration metric: took 3.483692ms to run NodePressure ...
	I1019 12:52:18.390827  651601 start.go:241] waiting for startup goroutines ...
	I1019 12:52:18.390837  651601 start.go:246] waiting for cluster config update ...
	I1019 12:52:18.390851  651601 start.go:255] writing updated cluster config ...
	I1019 12:52:18.391134  651601 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:18.395142  651601 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:18.399443  651601 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.403935  651601 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:52:18.403962  651601 pod_ready.go:86] duration metric: took 4.493999ms for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.405940  651601 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.410036  651601 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.410058  651601 pod_ready.go:86] duration metric: took 4.097261ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.412299  651601 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.416083  651601 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.416102  651601 pod_ready.go:86] duration metric: took 3.780007ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.418113  651601 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:18.800332  651601 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:18.800368  651601 pod_ready.go:86] duration metric: took 382.232068ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.001010  651601 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.399840  651601 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:52:19.399867  651601 pod_ready.go:86] duration metric: took 398.825641ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.600330  651601 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.999629  651601 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:19.999672  651601 pod_ready.go:86] duration metric: took 399.317944ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:19.999688  651601 pod_ready.go:40] duration metric: took 1.604518436s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:20.061915  651601 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:20.064494  651601 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
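	
	As a rough illustration of the "pod_ready" wait logged above, here is a minimal client-go sketch. It is an assumption for illustration only, not minikube's actual implementation: it polls kube-system pods carrying each of the listed labels until every matching pod reports a True Ready condition, within the same 4m0s budget the log mentions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		// The same label selectors that appear in the pod_ready.go lines above.
		selectors := []string{
			"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy",
			"component=kube-scheduler",
		}
		deadline := time.Now().Add(4 * time.Minute) // "extra waiting up to 4m0s"
	
		for _, sel := range selectors {
			for {
				pods, err := client.CoreV1().Pods("kube-system").List(
					context.TODO(), metav1.ListOptions{LabelSelector: sel})
				ready := err == nil && len(pods.Items) > 0
				if ready {
					for i := range pods.Items {
						if !podReady(&pods.Items[i]) {
							ready = false
							break
						}
					}
				}
				if ready {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %q\n", sel)
					break
				}
				time.Sleep(500 * time.Millisecond) // retry, as the log's retry lines do
			}
		}
	}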
	I1019 12:52:16.246140  657553 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:52:16.251353  657553 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:52:16.252365  657553 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:16.252392  657553 api_server.go:131] duration metric: took 1.007242213s to wait for apiserver health ...
	I1019 12:52:16.252404  657553 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:16.255472  657553 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:16.255505  657553 system_pods.go:61] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.255515  657553 system_pods.go:61] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:16.255536  657553 system_pods.go:61] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:16.255549  657553 system_pods.go:61] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:16.255559  657553 system_pods.go:61] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:16.255567  657553 system_pods.go:61] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:16.255580  657553 system_pods.go:61] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:16.255588  657553 system_pods.go:61] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.255600  657553 system_pods.go:74] duration metric: took 3.184234ms to wait for pod list to return data ...
	I1019 12:52:16.255612  657553 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:16.257684  657553 default_sa.go:45] found service account: "default"
	I1019 12:52:16.257703  657553 default_sa.go:55] duration metric: took 2.081404ms for default service account to be created ...
	I1019 12:52:16.257712  657553 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:16.260072  657553 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:16.260095  657553 system_pods.go:89] "coredns-66bc5c9577-pgxlp" [af0816b7-b4de-4d64-a4bb-0efbc821bb53] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:16.260103  657553 system_pods.go:89] "etcd-no-preload-561408" [0d036058-49c8-4176-b416-ed28089e7035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:16.260110  657553 system_pods.go:89] "kindnet-kq4cq" [1e5712d3-d393-4b98-8346-442229d87b07] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:16.260116  657553 system_pods.go:89] "kube-apiserver-no-preload-561408" [83625aff-bb50-4376-b99f-b4a252a21b0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:16.260121  657553 system_pods.go:89] "kube-controller-manager-no-preload-561408" [da4db941-5094-47df-9cdf-ace923ff41ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:16.260142  657553 system_pods.go:89] "kube-proxy-lppwp" [cf6aee53-b434-4009-aeb6-36cb62fc0769] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:16.260159  657553 system_pods.go:89] "kube-scheduler-no-preload-561408" [55552cd1-c6f1-4b76-9b51-c78a1c7aac05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:16.260167  657553 system_pods.go:89] "storage-provisioner" [e8c92cd5-cb77-4b3d-bc5a-20b606b8794d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:16.260179  657553 system_pods.go:126] duration metric: took 2.461251ms to wait for k8s-apps to be running ...
	I1019 12:52:16.260192  657553 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:16.260244  657553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:16.273038  657553 system_svc.go:56] duration metric: took 12.840667ms WaitForService to wait for kubelet
	I1019 12:52:16.273061  657553 kubeadm.go:586] duration metric: took 2.945407167s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:16.273089  657553 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:16.275467  657553 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:16.275490  657553 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:16.275504  657553 node_conditions.go:105] duration metric: took 2.40634ms to run NodePressure ...
	I1019 12:52:16.275519  657553 start.go:241] waiting for startup goroutines ...
	I1019 12:52:16.275529  657553 start.go:246] waiting for cluster config update ...
	I1019 12:52:16.275539  657553 start.go:255] writing updated cluster config ...
	I1019 12:52:16.275817  657553 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:16.279651  657553 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:16.282937  657553 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:18.288317  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:20.289843  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:18.530110  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:21.029832  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:22.290218  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:24.819087  657553 pod_ready.go:104] pod "coredns-66bc5c9577-pgxlp" is not "Ready", error: <nil>
	W1019 12:52:23.530687  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	W1019 12:52:26.028655  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
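	
	The "==> ... <==" sections that follow match the format of minikube's log aggregator; a dump like this can typically be produced for the profile with:
	
	  minikube logs -p default-k8s-diff-port-999693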
	
	
	==> CRI-O <==
	Oct 19 12:52:17 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:17.81739598Z" level=info msg="Starting container: 89d543c414cf8d3899498c6a6e4b0cf46a5b7be51ed0d4ebf2b41ea14e88d4f1" id=dc32c0a7-5b5a-4a6b-8fbb-5697c9b826e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:17 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:17.819294354Z" level=info msg="Started container" PID=1846 containerID=89d543c414cf8d3899498c6a6e4b0cf46a5b7be51ed0d4ebf2b41ea14e88d4f1 description=kube-system/coredns-66bc5c9577-hftjp/coredns id=dc32c0a7-5b5a-4a6b-8fbb-5697c9b826e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=00ea6f4086311dcc1404f6a9be66c828e9857116710c1c99bde511c843f4f814
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.616701729Z" level=info msg="Running pod sandbox: default/busybox/POD" id=68c58641-0851-4aa7-9f8a-5585c22b0b50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.616803702Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.622857272Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0758138b049d5bf3b9946c6928d5f0a07e0cc76695553261b11ed12822c4f60d UID:d1a7398f-f723-4f73-93f3-8aafc8fb32c1 NetNS:/var/run/netns/e57be23e-a24d-43e5-8cbf-25c787972080 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00003cba8}] Aliases:map[]}"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.622951706Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.637142973Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0758138b049d5bf3b9946c6928d5f0a07e0cc76695553261b11ed12822c4f60d UID:d1a7398f-f723-4f73-93f3-8aafc8fb32c1 NetNS:/var/run/netns/e57be23e-a24d-43e5-8cbf-25c787972080 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00003cba8}] Aliases:map[]}"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.637330528Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.638400888Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.639821888Z" level=info msg="Ran pod sandbox 0758138b049d5bf3b9946c6928d5f0a07e0cc76695553261b11ed12822c4f60d with infra container: default/busybox/POD" id=68c58641-0851-4aa7-9f8a-5585c22b0b50 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.64134146Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=43bcfd4c-a70c-40a3-9516-f70fa704bbc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.641644675Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=43bcfd4c-a70c-40a3-9516-f70fa704bbc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.641703102Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=43bcfd4c-a70c-40a3-9516-f70fa704bbc9 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.642598124Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f1fad6f-0c13-4bd6-ae45-e9bc380cf783 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:52:20 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:20.645371002Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.366010313Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0f1fad6f-0c13-4bd6-ae45-e9bc380cf783 name=/runtime.v1.ImageService/PullImage
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.366823621Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76dd68fa-6295-416e-970a-b2e93895e63b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.368371726Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eb594c79-673e-4820-8824-5652c13b508b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.371915321Z" level=info msg="Creating container: default/busybox/busybox" id=1e141e20-5583-4ff3-b555-d62eaf6e857d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.37280459Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.377548337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.378111817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.412503348Z" level=info msg="Created container 8bc804ac433cb702cdb16e8bda315fe927cda4645f9120a4fa83f0a8988de9e3: default/busybox/busybox" id=1e141e20-5583-4ff3-b555-d62eaf6e857d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.413386062Z" level=info msg="Starting container: 8bc804ac433cb702cdb16e8bda315fe927cda4645f9120a4fa83f0a8988de9e3" id=1c440421-0c1e-4485-b137-1637bd1569ec name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:21 default-k8s-diff-port-999693 crio[778]: time="2025-10-19T12:52:21.415802028Z" level=info msg="Started container" PID=1921 containerID=8bc804ac433cb702cdb16e8bda315fe927cda4645f9120a4fa83f0a8988de9e3 description=default/busybox/busybox id=1c440421-0c1e-4485-b137-1637bd1569ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=0758138b049d5bf3b9946c6928d5f0a07e0cc76695553261b11ed12822c4f60d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8bc804ac433cb       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   0758138b049d5       busybox                                                default
	89d543c414cf8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   00ea6f4086311       coredns-66bc5c9577-hftjp                               kube-system
	c6f968ccd74a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   03d9235c42151       storage-provisioner                                    kube-system
	5605bd3430c57       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   220b825ffa3ae       kube-proxy-cjxjt                                       kube-system
	6fa538e27c955       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   da1e5ededf91e       kindnet-79bv6                                          kube-system
	951a45608891c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   f8d396e8d5231       kube-controller-manager-default-k8s-diff-port-999693   kube-system
	f0a88b2b9d280       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   f0fb6a514c1f7       kube-apiserver-default-k8s-diff-port-999693            kube-system
	1c0892c19fc3e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   2b2bc28462d90       kube-scheduler-default-k8s-diff-port-999693            kube-system
	4006b727ff3c6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   8b3752c01a529       etcd-default-k8s-diff-port-999693                      kube-system
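	
	The table above has the shape of a CRI container listing; on the node it could be reproduced with something like:
	
	  sudo crictl ps -a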
	
	
	==> coredns [89d543c414cf8d3899498c6a6e4b0cf46a5b7be51ed0d4ebf2b41ea14e88d4f1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43745 - 62893 "HINFO IN 1234914726354768846.4145108560975499023. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.070232853s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-999693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-999693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-999693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_52_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-999693
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:20 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:20 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:20 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:20 +0000   Sun, 19 Oct 2025 12:52:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-999693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                dba8bd18-ed7d-4c69-88aa-2713b680a799
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-hftjp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-999693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-79bv6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-999693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-999693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-cjxjt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-999693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-999693 event: Registered Node default-k8s-diff-port-999693 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-999693 status is now: NodeReady
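	
	The node block above is standard kubectl describe output; against this cluster it would correspond to:
	
	  kubectl describe node default-k8s-diff-port-999693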
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [4006b727ff3c64dc480666c4f9eb7aee9a68f78566b9474e686528d2bf8bf071] <==
	{"level":"warn","ts":"2025-10-19T12:51:57.168307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.176918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.186134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.193581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.200299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.207149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.214519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.221034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.227326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.234707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.240690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.247642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.254094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.260473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.273335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.286098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.293309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.299838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.306847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.314603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.321791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.337644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.346414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.353479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:51:57.402455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34256","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:52:29 up  2:34,  0 user,  load average: 5.71, 5.01, 3.11
	Linux default-k8s-diff-port-999693 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fa538e27c955d8efe1455c7721e034506e8586196a78f4248124b478a552c27] <==
	I1019 12:52:06.692174       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:06.692410       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 12:52:06.692555       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:06.692570       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:06.692579       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:06.987797       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:06.987842       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:06.987862       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:06.988071       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:07.287969       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:07.288016       1 metrics.go:72] Registering metrics
	I1019 12:52:07.288088       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:16.901727       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:52:16.901803       1 main.go:301] handling current node
	I1019 12:52:26.901529       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:52:26.901569       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f0a88b2b9d2803c52127b8d54c41adc3ae47e65aebd07ba22c3576fbffe884d5] <==
	I1019 12:51:57.930250       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1019 12:51:57.932074       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:51:57.936140       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:57.936264       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 12:51:57.943086       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:51:57.943184       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:51:57.961661       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:51:58.834183       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:51:58.837930       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:51:58.837945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:51:59.282638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:51:59.319309       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:51:59.441796       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:51:59.447754       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1019 12:51:59.448851       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:51:59.453552       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:51:59.866694       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:00.237938       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:00.246276       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:52:00.253313       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:52:05.519480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:52:05.670082       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:52:05.675561       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:52:05.974343       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1019 12:52:28.398120       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48728: use of closed network connection
	
	
	==> kube-controller-manager [951a45608891c66daf58a2754c842809fc2a5a8926605aa868a66e2d58880bf7] <==
	I1019 12:52:04.864759       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-999693"
	I1019 12:52:04.864810       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 12:52:04.865035       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 12:52:04.865042       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:52:04.865833       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:04.865848       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:52:04.865871       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 12:52:04.865891       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1019 12:52:04.865961       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:52:04.866003       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:52:04.866029       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:52:04.866266       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:52:04.866406       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:52:04.866781       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 12:52:04.867982       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 12:52:04.869362       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:04.870884       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:04.876046       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:52:04.884229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:52:04.884290       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:52:04.885377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:52:04.885451       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:04.885463       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:52:04.891535       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:52:19.867175       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [5605bd3430c5721195f495cc53fffd2fb6d94e37e68f0b1b57218bd5d2785cdc] <==
	I1019 12:52:06.461341       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:06.524122       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:06.625128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:06.625165       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 12:52:06.625252       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:06.645531       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:06.645618       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:06.651443       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:06.651906       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:06.651951       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:06.654169       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:06.654318       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:06.654569       1 config.go:200] "Starting service config controller"
	I1019 12:52:06.654618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:06.654670       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:06.654702       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:06.655069       1 config.go:309] "Starting node config controller"
	I1019 12:52:06.655092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:06.655100       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:06.754545       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:52:06.755389       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:06.755495       1 shared_informer.go:356] "Caches are synced" controller="service config"
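	
	Per-component blocks like this one are headed by the container ID; on the node they can typically be fetched straight from the runtime (crictl accepts an ID prefix), e.g.:
	
	  sudo crictl logs 5605bd3430c57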
	
	
	==> kube-scheduler [1c0892c19fc3e41dbf7ecd6f1b10081bae0370dc791df2e0fae3ae711ebe2205] <==
	E1019 12:51:57.874953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:51:57.875012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:51:57.875012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:51:57.875068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:51:57.875101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:51:57.875122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:51:57.875136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 12:51:57.875181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:51:57.875211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:51:57.875237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:51:57.875236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:51:58.712821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:51:58.734154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:51:58.742162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:51:58.826814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:51:58.942569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:51:58.963623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:51:58.969609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:51:59.017654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:51:59.021616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:51:59.067728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:51:59.091320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:51:59.116663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 12:51:59.206470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1019 12:52:02.271540       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:52:01 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:01.117825    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-999693" podStartSLOduration=1.117809725 podStartE2EDuration="1.117809725s" podCreationTimestamp="2025-10-19 12:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:01.117158532 +0000 UTC m=+1.116006368" watchObservedRunningTime="2025-10-19 12:52:01.117809725 +0000 UTC m=+1.116657561"
	Oct 19 12:52:01 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:01.126834    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-999693" podStartSLOduration=1.126818031 podStartE2EDuration="1.126818031s" podCreationTimestamp="2025-10-19 12:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:01.126755191 +0000 UTC m=+1.125603030" watchObservedRunningTime="2025-10-19 12:52:01.126818031 +0000 UTC m=+1.125665868"
	Oct 19 12:52:01 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:01.137119    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-999693" podStartSLOduration=1.137102088 podStartE2EDuration="1.137102088s" podCreationTimestamp="2025-10-19 12:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:01.136978328 +0000 UTC m=+1.135826179" watchObservedRunningTime="2025-10-19 12:52:01.137102088 +0000 UTC m=+1.135949931"
	Oct 19 12:52:04 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:04.922563    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 12:52:04 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:04.923236    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.124286    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/662f6b7b-b302-4d2c-b6b0-c3def258b315-xtables-lock\") pod \"kube-proxy-cjxjt\" (UID: \"662f6b7b-b302-4d2c-b6b0-c3def258b315\") " pod="kube-system/kube-proxy-cjxjt"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.124913    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/662f6b7b-b302-4d2c-b6b0-c3def258b315-kube-proxy\") pod \"kube-proxy-cjxjt\" (UID: \"662f6b7b-b302-4d2c-b6b0-c3def258b315\") " pod="kube-system/kube-proxy-cjxjt"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125033    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jsnv\" (UniqueName: \"kubernetes.io/projected/662f6b7b-b302-4d2c-b6b0-c3def258b315-kube-api-access-7jsnv\") pod \"kube-proxy-cjxjt\" (UID: \"662f6b7b-b302-4d2c-b6b0-c3def258b315\") " pod="kube-system/kube-proxy-cjxjt"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125066    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f614301-5daf-43cc-9013-94bf6d7d161a-xtables-lock\") pod \"kindnet-79bv6\" (UID: \"6f614301-5daf-43cc-9013-94bf6d7d161a\") " pod="kube-system/kindnet-79bv6"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125132    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvljz\" (UniqueName: \"kubernetes.io/projected/6f614301-5daf-43cc-9013-94bf6d7d161a-kube-api-access-gvljz\") pod \"kindnet-79bv6\" (UID: \"6f614301-5daf-43cc-9013-94bf6d7d161a\") " pod="kube-system/kindnet-79bv6"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125201    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/662f6b7b-b302-4d2c-b6b0-c3def258b315-lib-modules\") pod \"kube-proxy-cjxjt\" (UID: \"662f6b7b-b302-4d2c-b6b0-c3def258b315\") " pod="kube-system/kube-proxy-cjxjt"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125222    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f614301-5daf-43cc-9013-94bf6d7d161a-cni-cfg\") pod \"kindnet-79bv6\" (UID: \"6f614301-5daf-43cc-9013-94bf6d7d161a\") " pod="kube-system/kindnet-79bv6"
	Oct 19 12:52:06 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:06.125292    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f614301-5daf-43cc-9013-94bf6d7d161a-lib-modules\") pod \"kindnet-79bv6\" (UID: \"6f614301-5daf-43cc-9013-94bf6d7d161a\") " pod="kube-system/kindnet-79bv6"
	Oct 19 12:52:07 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:07.128646    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cjxjt" podStartSLOduration=2.128626563 podStartE2EDuration="2.128626563s" podCreationTimestamp="2025-10-19 12:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:07.128433938 +0000 UTC m=+7.127281765" watchObservedRunningTime="2025-10-19 12:52:07.128626563 +0000 UTC m=+7.127474400"
	Oct 19 12:52:11 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:11.525637    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-79bv6" podStartSLOduration=6.525613964 podStartE2EDuration="6.525613964s" podCreationTimestamp="2025-10-19 12:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:07.141905633 +0000 UTC m=+7.140753469" watchObservedRunningTime="2025-10-19 12:52:11.525613964 +0000 UTC m=+11.524461803"
	Oct 19 12:52:17 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:17.418301    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 19 12:52:17 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:17.509438    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5bn\" (UniqueName: \"kubernetes.io/projected/53c60896-3b7d-4f84-bc9d-6eb228b511b7-kube-api-access-fq5bn\") pod \"coredns-66bc5c9577-hftjp\" (UID: \"53c60896-3b7d-4f84-bc9d-6eb228b511b7\") " pod="kube-system/coredns-66bc5c9577-hftjp"
	Oct 19 12:52:17 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:17.509484    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cx77\" (UniqueName: \"kubernetes.io/projected/1446462f-3c0a-4cf9-b8a5-7b8096844759-kube-api-access-6cx77\") pod \"storage-provisioner\" (UID: \"1446462f-3c0a-4cf9-b8a5-7b8096844759\") " pod="kube-system/storage-provisioner"
	Oct 19 12:52:17 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:17.509503    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1446462f-3c0a-4cf9-b8a5-7b8096844759-tmp\") pod \"storage-provisioner\" (UID: \"1446462f-3c0a-4cf9-b8a5-7b8096844759\") " pod="kube-system/storage-provisioner"
	Oct 19 12:52:17 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:17.509559    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c60896-3b7d-4f84-bc9d-6eb228b511b7-config-volume\") pod \"coredns-66bc5c9577-hftjp\" (UID: \"53c60896-3b7d-4f84-bc9d-6eb228b511b7\") " pod="kube-system/coredns-66bc5c9577-hftjp"
	Oct 19 12:52:18 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:18.152869    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hftjp" podStartSLOduration=12.152849951 podStartE2EDuration="12.152849951s" podCreationTimestamp="2025-10-19 12:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:18.152361346 +0000 UTC m=+18.151209183" watchObservedRunningTime="2025-10-19 12:52:18.152849951 +0000 UTC m=+18.151697788"
	Oct 19 12:52:18 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:18.175255    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.175229594 podStartE2EDuration="12.175229594s" podCreationTimestamp="2025-10-19 12:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:52:18.162218589 +0000 UTC m=+18.161066426" watchObservedRunningTime="2025-10-19 12:52:18.175229594 +0000 UTC m=+18.174077430"
	Oct 19 12:52:20 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:20.430522    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5bb\" (UniqueName: \"kubernetes.io/projected/d1a7398f-f723-4f73-93f3-8aafc8fb32c1-kube-api-access-pm5bb\") pod \"busybox\" (UID: \"d1a7398f-f723-4f73-93f3-8aafc8fb32c1\") " pod="default/busybox"
	Oct 19 12:52:22 default-k8s-diff-port-999693 kubelet[1311]: I1019 12:52:22.170177    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.444384882 podStartE2EDuration="2.170152873s" podCreationTimestamp="2025-10-19 12:52:20 +0000 UTC" firstStartedPulling="2025-10-19 12:52:20.642036735 +0000 UTC m=+20.640884552" lastFinishedPulling="2025-10-19 12:52:21.367804708 +0000 UTC m=+21.366652543" observedRunningTime="2025-10-19 12:52:22.169886845 +0000 UTC m=+22.168734714" watchObservedRunningTime="2025-10-19 12:52:22.170152873 +0000 UTC m=+22.169000710"
	Oct 19 12:52:28 default-k8s-diff-port-999693 kubelet[1311]: E1019 12:52:28.398129    1311 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41942->127.0.0.1:44427: write tcp 127.0.0.1:41942->127.0.0.1:44427: write: broken pipe
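	
	The podStartSLOduration values above are, to within the tracker's separate clock read, observedRunningTime minus podCreationTimestamp. A minimal Go sketch checking the coredns-66bc5c9577-hftjp line (timestamps copied from the log; the result lands within a millisecond of the reported 12.152849951s):
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	    )
	
	    func main() {
	        // Timestamps copied verbatim from the kubelet log line above.
	        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	        created, _ := time.Parse(layout, "2025-10-19 12:52:06 +0000 UTC")
	        running, _ := time.Parse(layout, "2025-10-19 12:52:18.152361346 +0000 UTC")
	        // The tracker samples its own clock, hence the sub-millisecond gap
	        // to the reported podStartSLOduration=12.152849951.
	        fmt.Println(running.Sub(created)) // 12.152361346s
	    }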
	
	
	==> storage-provisioner [c6f968ccd74a3fc10edc80106090b1161029723854e07dab6e9f62d1fd483e4c] <==
	I1019 12:52:17.825311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:17.833450       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:17.833500       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:52:17.835776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:17.840880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:52:17.841019       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:52:17.841145       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94ffe2ba-d9f2-4be7-afb9-f7f386e949ce", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-999693_57b1213d-1812-4744-9410-b097a9523942 became leader
	I1019 12:52:17.841174       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_57b1213d-1812-4744-9410-b097a9523942!
	W1019 12:52:17.843731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:17.848763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:52:17.942281       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_57b1213d-1812-4744-9410-b097a9523942!
	W1019 12:52:19.852696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:19.858833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:21.863013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:21.867579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:23.871313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:23.876542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:25.880179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:25.885571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:27.888598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:27.892744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:29.895904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:29.900498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.08s)
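
Note the storage-provisioner block above: its leader election still takes the legacy Endpoints-based lock, which is what emits a `v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice` warning pair on every 2s renew cycle. A minimal client-go sketch of the Lease-based lock the warning points toward (lock name and namespace mirror the log; the identity string and in-cluster config are illustrative assumptions, not the provisioner's actual code):

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // A Lease in coordination.k8s.io replaces the deprecated Endpoints
        // object seen in the log, so no deprecation warnings are produced.
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"}, // illustrative
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second, // matches the 2s warning cadence above
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
                OnStoppedLeading: func() { log.Println("lost leadership") },
            },
        })
    }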

TestStartStop/group/old-k8s-version/serial/Pause (7.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-577062 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-577062 --alsologtostderr -v=1: exit status 80 (2.165899021s)

-- stdout --
	* Pausing node old-k8s-version-577062 ... 
	
	

-- /stdout --
** stderr ** 
	I1019 12:53:00.292496  668230 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:00.292844  668230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:00.292856  668230 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:00.292861  668230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:00.293159  668230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:00.293496  668230 out.go:368] Setting JSON to false
	I1019 12:53:00.293552  668230 mustload.go:65] Loading cluster: old-k8s-version-577062
	I1019 12:53:00.294062  668230 config.go:182] Loaded profile config "old-k8s-version-577062": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 12:53:00.294681  668230 cli_runner.go:164] Run: docker container inspect old-k8s-version-577062 --format={{.State.Status}}
	I1019 12:53:00.316512  668230 host.go:66] Checking if "old-k8s-version-577062" exists ...
	I1019 12:53:00.317115  668230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:00.392623  668230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-19 12:53:00.380280354 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:00.393589  668230 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-577062 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 12:53:00.395454  668230 out.go:179] * Pausing node old-k8s-version-577062 ... 
	I1019 12:53:00.396703  668230 host.go:66] Checking if "old-k8s-version-577062" exists ...
	I1019 12:53:00.397157  668230 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:00.397209  668230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-577062
	I1019 12:53:00.419127  668230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/old-k8s-version-577062/id_rsa Username:docker}
	I1019 12:53:00.530463  668230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:00.553589  668230 pause.go:52] kubelet running: true
	I1019 12:53:00.553697  668230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:00.791887  668230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:00.791992  668230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:00.888379  668230 cri.go:89] found id: "b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44"
	I1019 12:53:00.888410  668230 cri.go:89] found id: "831f176d66e63a51f4bc180ce401d4ecda5e783f443e4ffd91216fd1999c8eef"
	I1019 12:53:00.888415  668230 cri.go:89] found id: "bca9cb8e7e1a4789fce59ad4a5788c1e7058d9f9e7ec1057f342040b015717bc"
	I1019 12:53:00.888418  668230 cri.go:89] found id: "e9c3dda964119fe6efea193da287473cefe468088e2bca9f9cf19321e2a8bfeb"
	I1019 12:53:00.888451  668230 cri.go:89] found id: "a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f"
	I1019 12:53:00.888457  668230 cri.go:89] found id: "ba25f6a999b0c5ae02f451d523de313de12a4d3d20296a8becbbee6fa1a54b92"
	I1019 12:53:00.888461  668230 cri.go:89] found id: "fbf4c9d76e1dbee5411f82439799eddfa94579d729009e817ab32efa62aa037b"
	I1019 12:53:00.888466  668230 cri.go:89] found id: "8577c744298fa841bb6cdfc8e4e7b5ca9854b6075ef4d4ee96ca794f243de677"
	I1019 12:53:00.888470  668230 cri.go:89] found id: "2c9fe6c9b1b32926f91a1bde357e191e5e1e3b8139fa61a8202db438bcecf6d3"
	I1019 12:53:00.888477  668230 cri.go:89] found id: "29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	I1019 12:53:00.888485  668230 cri.go:89] found id: "141891d9bcecd7b8f29e6a840f8c01c263be938405ca6b55629648a298625543"
	I1019 12:53:00.888489  668230 cri.go:89] found id: ""
	I1019 12:53:00.888537  668230 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:00.904933  668230 retry.go:31] will retry after 153.383853ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:00Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:01.059339  668230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:01.075864  668230 pause.go:52] kubelet running: false
	I1019 12:53:01.075954  668230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:01.332796  668230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:01.332923  668230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:01.466122  668230 cri.go:89] found id: "b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44"
	I1019 12:53:01.466154  668230 cri.go:89] found id: "831f176d66e63a51f4bc180ce401d4ecda5e783f443e4ffd91216fd1999c8eef"
	I1019 12:53:01.466161  668230 cri.go:89] found id: "bca9cb8e7e1a4789fce59ad4a5788c1e7058d9f9e7ec1057f342040b015717bc"
	I1019 12:53:01.466165  668230 cri.go:89] found id: "e9c3dda964119fe6efea193da287473cefe468088e2bca9f9cf19321e2a8bfeb"
	I1019 12:53:01.466169  668230 cri.go:89] found id: "a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f"
	I1019 12:53:01.466174  668230 cri.go:89] found id: "ba25f6a999b0c5ae02f451d523de313de12a4d3d20296a8becbbee6fa1a54b92"
	I1019 12:53:01.466178  668230 cri.go:89] found id: "fbf4c9d76e1dbee5411f82439799eddfa94579d729009e817ab32efa62aa037b"
	I1019 12:53:01.466182  668230 cri.go:89] found id: "8577c744298fa841bb6cdfc8e4e7b5ca9854b6075ef4d4ee96ca794f243de677"
	I1019 12:53:01.466186  668230 cri.go:89] found id: "2c9fe6c9b1b32926f91a1bde357e191e5e1e3b8139fa61a8202db438bcecf6d3"
	I1019 12:53:01.466193  668230 cri.go:89] found id: "29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	I1019 12:53:01.466197  668230 cri.go:89] found id: "141891d9bcecd7b8f29e6a840f8c01c263be938405ca6b55629648a298625543"
	I1019 12:53:01.466201  668230 cri.go:89] found id: ""
	I1019 12:53:01.466243  668230 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:01.485154  668230 retry.go:31] will retry after 516.743574ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:01Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:02.002655  668230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:02.021873  668230 pause.go:52] kubelet running: false
	I1019 12:53:02.021941  668230 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:02.256196  668230 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:02.256294  668230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:02.360531  668230 cri.go:89] found id: "b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44"
	I1019 12:53:02.360560  668230 cri.go:89] found id: "831f176d66e63a51f4bc180ce401d4ecda5e783f443e4ffd91216fd1999c8eef"
	I1019 12:53:02.360567  668230 cri.go:89] found id: "bca9cb8e7e1a4789fce59ad4a5788c1e7058d9f9e7ec1057f342040b015717bc"
	I1019 12:53:02.360572  668230 cri.go:89] found id: "e9c3dda964119fe6efea193da287473cefe468088e2bca9f9cf19321e2a8bfeb"
	I1019 12:53:02.360584  668230 cri.go:89] found id: "a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f"
	I1019 12:53:02.360589  668230 cri.go:89] found id: "ba25f6a999b0c5ae02f451d523de313de12a4d3d20296a8becbbee6fa1a54b92"
	I1019 12:53:02.360593  668230 cri.go:89] found id: "fbf4c9d76e1dbee5411f82439799eddfa94579d729009e817ab32efa62aa037b"
	I1019 12:53:02.360596  668230 cri.go:89] found id: "8577c744298fa841bb6cdfc8e4e7b5ca9854b6075ef4d4ee96ca794f243de677"
	I1019 12:53:02.360601  668230 cri.go:89] found id: "2c9fe6c9b1b32926f91a1bde357e191e5e1e3b8139fa61a8202db438bcecf6d3"
	I1019 12:53:02.360610  668230 cri.go:89] found id: "29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	I1019 12:53:02.360614  668230 cri.go:89] found id: "141891d9bcecd7b8f29e6a840f8c01c263be938405ca6b55629648a298625543"
	I1019 12:53:02.360618  668230 cri.go:89] found id: ""
	I1019 12:53:02.360665  668230 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:02.382525  668230 out.go:203] 
	W1019 12:53:02.384293  668230 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:53:02.384317  668230 out.go:285] * 
	* 
	W1019 12:53:02.392154  668230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:53:02.394014  668230 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-577062 --alsologtostderr -v=1 failed: exit status 80
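
This is the same failure mode as the other Pause tests in this run: after disabling the kubelet, minikube enumerates running containers with `sudo runc list -f json`, and every attempt (including the 153ms and 516ms backoff retries) exits with `open /run/runc: no such file or directory`, i.e. runc's default state root is missing on this crio node. A minimal Go sketch of the failing step, assuming only that `runc list -f json` prints a JSON array whose entries carry `id` and `status` fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Subset of the per-container state runc prints for `runc list -f json`.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        // Mirrors pause.go's listing step. With /run/runc absent, runc exits
        // non-zero before emitting any JSON, which is the "Process exited
        // with status 1" seen in the log.
        out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
        if err != nil {
            log.Fatalf("list running: %v", err)
        }
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            log.Fatal(err)
        }
        for _, c := range cs {
            fmt.Println(c.ID, c.Status)
        }
    }
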
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-577062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-577062:

-- stdout --
	[
	    {
	        "Id": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	        "Created": "2025-10-19T12:50:42.983195608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655637,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:51:57.060341737Z",
	            "FinishedAt": "2025-10-19T12:51:56.143621748Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hostname",
	        "HostsPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hosts",
	        "LogPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da-json.log",
	        "Name": "/old-k8s-version-577062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-577062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-577062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	                "LowerDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-577062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-577062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-577062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ba102986ccec709fe88a6b60c1d89d7d3e8d3623ff784198d3d0477dd33e85c",
	            "SandboxKey": "/var/run/docker/netns/7ba102986cce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-577062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:cf:7e:e7:d9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "502db93731f3c65b158cfaea0389f311a4314988a15a727b3ce6c492ca19cd92",
	                    "EndpointID": "36f1415fd6b4505712ea9dedcb743451dc54335b49d3ae816a9dcd0a88c25554",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-577062",
	                        "368928979a17"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
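
The dump above is the full inspect document; the status probes before and after it pull individual fields with Go templates instead. A minimal sketch of the same query, assuming the Docker CLI is on PATH (container name taken from this run):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Same idea as the cli_runner/helpers calls around this dump: ask the
        // Docker CLI for just the state fields rather than the whole document.
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}} paused={{.State.Paused}}",
            "old-k8s-version-577062").Output()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(strings.TrimSpace(string(out))) // e.g. "running paused=false"
    }
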
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062: exit status 2 (424.445858ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25
E1019 12:53:02.889159  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25: (1.530425917s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:46.925201  664256 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:46.925511  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925521  664256 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:46.925526  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925724  664256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:46.926177  664256 out.go:368] Setting JSON to false
	I1019 12:52:46.927476  664256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9315,"bootTime":1760869052,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:46.927572  664256 start.go:141] virtualization: kvm guest
	I1019 12:52:46.929196  664256 out.go:179] * [default-k8s-diff-port-999693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:46.930756  664256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:46.930801  664256 notify.go:220] Checking for updates...
	I1019 12:52:46.932758  664256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:46.934048  664256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:46.935192  664256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:46.936498  664256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:46.937762  664256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:46.939394  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:46.939848  664256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:46.963683  664256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:46.963772  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.023378  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.013329476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.023535  664256 docker.go:318] overlay module found
	I1019 12:52:47.025269  664256 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:47.026568  664256 start.go:305] selected driver: docker
	I1019 12:52:47.026597  664256 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.026732  664256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:47.027471  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.086363  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.076802932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.086679  664256 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:47.086707  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:47.086755  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:47.086787  664256 start.go:349] cluster config:
	{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.088476  664256 out.go:179] * Starting "default-k8s-diff-port-999693" primary control-plane node in "default-k8s-diff-port-999693" cluster
	I1019 12:52:47.089564  664256 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:47.090727  664256 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:47.091742  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:47.091773  664256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:52:47.091781  664256 cache.go:58] Caching tarball of preloaded images
	I1019 12:52:47.091796  664256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:47.091859  664256 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:52:47.091870  664256 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:52:47.091959  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.112105  664256 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:47.112128  664256 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:47.112142  664256 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:52:47.112172  664256 start.go:360] acquireMachinesLock for default-k8s-diff-port-999693: {Name:mke26e7439408c8adecea1bbb9344a31dd77b3c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:47.112226  664256 start.go:364] duration metric: took 36.455µs to acquireMachinesLock for "default-k8s-diff-port-999693"
	I1019 12:52:47.112245  664256 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:47.112252  664256 fix.go:54] fixHost starting: 
	I1019 12:52:47.112490  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.129772  664256 fix.go:112] recreateIfNeeded on default-k8s-diff-port-999693: state=Stopped err=<nil>
	W1019 12:52:47.129802  664256 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:44.281015  663517 out.go:252] * Restarting existing docker container for "embed-certs-123864" ...
	I1019 12:52:44.281101  663517 cli_runner.go:164] Run: docker start embed-certs-123864
	I1019 12:52:44.526509  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:44.546310  663517 kic.go:430] container "embed-certs-123864" state is running.
	I1019 12:52:44.546720  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:44.565833  663517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:52:44.566069  663517 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:44.566147  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:44.585705  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:44.585938  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:44.585949  663517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:44.586499  663517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58104->127.0.0.1:33490: read: connection reset by peer
	I1019 12:52:47.734652  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
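Note: the handshake failure a few lines up ("connection reset by peer") is expected immediately after "docker start" - sshd inside the container is not yet accepting connections, and the provisioner simply retries until the command succeeds about three seconds later. A minimal Go sketch of that recover-by-retry loop (dialWithRetry is an illustrative name, not minikube's actual helper):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until the server accepts
	// or the deadline passes - the retry behavior visible in the log above.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33490", 30*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("sshd is accepting connections")
	}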
	
	I1019 12:52:47.734694  663517 ubuntu.go:182] provisioning hostname "embed-certs-123864"
	I1019 12:52:47.734763  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.754305  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.754574  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.754594  663517 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-123864 && echo "embed-certs-123864" | sudo tee /etc/hostname
	I1019 12:52:47.900303  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.900379  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.918114  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.918334  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.918355  663517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-123864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-123864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-123864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:48.051196  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
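Note: the shell run over SSH above is idempotent - it only touches /etc/hosts when the hostname is missing, rewriting an existing 127.0.1.1 line in place or appending one. An equivalent Go sketch (ensureHostsEntry is a hypothetical name):

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet in the log: if no line already
	// ends with the hostname, rewrite (or append) the 127.0.1.1 entry.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // hostname already present, nothing to do
		}
		lines := strings.Split(string(data), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "embed-certs-123864"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}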
	I1019 12:52:48.051226  663517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:48.051276  663517 ubuntu.go:190] setting up certificates
	I1019 12:52:48.051294  663517 provision.go:84] configureAuth start
	I1019 12:52:48.051351  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:48.069277  663517 provision.go:143] copyHostCerts
	I1019 12:52:48.069333  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:48.069349  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:48.069433  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:48.069546  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:48.069557  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:48.069604  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:48.069660  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:48.069667  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:48.069692  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:48.069741  663517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.embed-certs-123864 san=[127.0.0.1 192.168.76.2 embed-certs-123864 localhost minikube]
	I1019 12:52:48.585780  663517 provision.go:177] copyRemoteCerts
	I1019 12:52:48.585838  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:48.585871  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.604279  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:48.702233  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:48.720721  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:48.738512  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:48.755942  663517 provision.go:87] duration metric: took 704.627825ms to configureAuth
	I1019 12:52:48.755977  663517 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:48.756154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:48.756278  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.775133  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:48.775433  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:48.775459  663517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:49.061359  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:49.061389  663517 machine.go:96] duration metric: took 4.495303282s to provisionDockerMachine
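Note: the provisioning step that just finished wrote a one-line sysconfig drop-in so CRI-O treats the in-cluster service CIDR as an insecure registry, then restarted the service. A standalone Go sketch of the same write, with the path and flag taken from the log (writeCrioSysconfig is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// writeCrioSysconfig reproduces the drop-in from the log: CRI-O is told to
	// treat the service CIDR as an insecure registry, then restarted.
	func writeCrioSysconfig(serviceCIDR string) error {
		body := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
		if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
			return err
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(body), 0644); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "crio").Run()
	}

	func main() {
		if err := writeCrioSysconfig("10.96.0.0/12"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}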
	I1019 12:52:49.061401  663517 start.go:293] postStartSetup for "embed-certs-123864" (driver="docker")
	I1019 12:52:49.061414  663517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:49.061511  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:49.061564  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.787829  657553 pod_ready.go:94] pod "coredns-66bc5c9577-pgxlp" is "Ready"
	I1019 12:52:47.787855  657553 pod_ready.go:86] duration metric: took 31.504899877s for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.789711  657553 pod_ready.go:83] waiting for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.793406  657553 pod_ready.go:94] pod "etcd-no-preload-561408" is "Ready"
	I1019 12:52:47.793446  657553 pod_ready.go:86] duration metric: took 3.709623ms for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.795182  657553 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.798678  657553 pod_ready.go:94] pod "kube-apiserver-no-preload-561408" is "Ready"
	I1019 12:52:47.798700  657553 pod_ready.go:86] duration metric: took 3.496714ms for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.800596  657553 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.986813  657553 pod_ready.go:94] pod "kube-controller-manager-no-preload-561408" is "Ready"
	I1019 12:52:47.986842  657553 pod_ready.go:86] duration metric: took 186.220802ms for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.186670  657553 pod_ready.go:83] waiting for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.586865  657553 pod_ready.go:94] pod "kube-proxy-lppwp" is "Ready"
	I1019 12:52:48.586892  657553 pod_ready.go:86] duration metric: took 400.184165ms for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.785758  657553 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186913  657553 pod_ready.go:94] pod "kube-scheduler-no-preload-561408" is "Ready"
	I1019 12:52:49.186953  657553 pod_ready.go:86] duration metric: took 401.160394ms for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186968  657553 pod_ready.go:40] duration metric: took 32.907293647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.233509  657553 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:49.235163  657553 out.go:179] * Done! kubectl is now configured to use "no-preload-561408" cluster and "default" namespace by default
	W1019 12:52:47.528927  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	I1019 12:52:48.027407  655442 pod_ready.go:94] pod "coredns-5dd5756b68-44mqv" is "Ready"
	I1019 12:52:48.027445  655442 pod_ready.go:86] duration metric: took 40.505181601s for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.030160  655442 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.034042  655442 pod_ready.go:94] pod "etcd-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.034071  655442 pod_ready.go:86] duration metric: took 3.888307ms for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.036741  655442 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.040245  655442 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.040263  655442 pod_ready.go:86] duration metric: took 3.503128ms for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.042393  655442 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.225329  655442 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.225354  655442 pod_ready.go:86] duration metric: took 182.944102ms for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.426194  655442 pod_ready.go:83] waiting for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.826171  655442 pod_ready.go:94] pod "kube-proxy-lhths" is "Ready"
	I1019 12:52:48.826194  655442 pod_ready.go:86] duration metric: took 399.973598ms for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.025864  655442 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425023  655442 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-577062" is "Ready"
	I1019 12:52:49.425051  655442 pod_ready.go:86] duration metric: took 399.16124ms for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425063  655442 pod_ready.go:40] duration metric: took 41.909017776s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.471302  655442 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 12:52:49.473153  655442 out.go:203] 
	W1019 12:52:49.474513  655442 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 12:52:49.475817  655442 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:52:49.477137  655442 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-577062" cluster and "default" namespace by default
	I1019 12:52:49.080598  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.176835  663517 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:49.180594  663517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:49.180624  663517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:49.180639  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:49.180704  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:49.180802  663517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:49.180915  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:49.188874  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:49.207471  663517 start.go:296] duration metric: took 146.052119ms for postStartSetup
	I1019 12:52:49.207569  663517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:49.207618  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.227005  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.322539  663517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:49.327981  663517 fix.go:56] duration metric: took 5.066251838s for fixHost
	I1019 12:52:49.328013  663517 start.go:83] releasing machines lock for "embed-certs-123864", held for 5.066315254s
	I1019 12:52:49.328080  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:49.348437  663517 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:49.348488  663517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:49.348506  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.348561  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.368071  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.368417  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.525163  663517 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:49.534330  663517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:49.578043  663517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:49.583920  663517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:49.583993  663517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:49.593384  663517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
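Note: the find/mv pair above sidelines any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so they cannot shadow the CNI minikube is about to install (kindnet here). A rough Go equivalent (disableBridgeCNI is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman CNI config files to *.mk_disabled,
	// matching the find -exec mv pattern in the log above.
	func disableBridgeCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join(dir, name)
				if err := os.Rename(old, old+".mk_disabled"); err != nil {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}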
	I1019 12:52:49.593406  663517 start.go:495] detecting cgroup driver to use...
	I1019 12:52:49.593463  663517 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:49.593523  663517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:49.612003  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:49.626574  663517 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:49.626639  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:49.641058  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:49.653880  663517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:49.736282  663517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:49.834377  663517 docker.go:234] disabling docker service ...
	I1019 12:52:49.834478  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:49.850898  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:49.864746  663517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:49.939108  663517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:50.014260  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:50.026706  663517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:50.040656  663517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:50.040725  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.049794  663517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:50.049857  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.058814  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.067348  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.075837  663517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:50.083843  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.092439  663517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.100689  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.109083  663517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:50.116037  663517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:50.123017  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.196214  663517 ssh_runner.go:195] Run: sudo systemctl restart crio
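Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with approximately these settings before the restart - pause image pinned, systemd cgroups, conmon in the pod cgroup, and unprivileged low ports opened (reconstructed from the commands, not copied from the machine):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]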
	I1019 12:52:50.304544  663517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:50.304601  663517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:50.308678  663517 start.go:563] Will wait 60s for crictl version
	I1019 12:52:50.308736  663517 ssh_runner.go:195] Run: which crictl
	I1019 12:52:50.312585  663517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:50.336989  663517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:50.337082  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.365185  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.395636  663517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:50.396988  663517 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:50.414563  663517 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:50.418760  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
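Note: unlike the 127.0.1.1 edit earlier, this hosts update rewrites the whole file - strip any stale host.minikube.internal line, append the fresh gateway mapping, then cp the result over /etc/hosts (likely because /etc/hosts is bind-mounted into the container, where sed -i's inode replacement would fail). A Go sketch of the same pattern (setHostsMapping is a hypothetical name):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsMapping drops any line already mapping host, then appends
	// "ip\thost" - the grep -v / echo / cp pattern from the log. WriteFile
	// truncates in place, so the bind-mounted file keeps its inode.
	func setHostsMapping(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, l := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(l, "\t"+host) {
				kept = append(kept, l)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := setHostsMapping("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}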
	I1019 12:52:50.429343  663517 kubeadm.go:883] updating cluster {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:50.429499  663517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:50.429554  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.463514  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.463537  663517 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:50.463585  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.489852  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.489884  663517 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:50.489897  663517 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:50.490024  663517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-123864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:50.490091  663517 ssh_runner.go:195] Run: crio config
	I1019 12:52:50.540351  663517 cni.go:84] Creating CNI manager for ""
	I1019 12:52:50.540379  663517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:50.540402  663517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:50.540455  663517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-123864 NodeName:embed-certs-123864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:50.540626  663517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-123864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:50.540708  663517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:50.548975  663517 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:50.549037  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:50.556535  663517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 12:52:50.569078  663517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:50.582078  663517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 12:52:50.594598  663517 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:50.598683  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:50.609655  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.691984  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:50.714791  663517 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864 for IP: 192.168.76.2
	I1019 12:52:50.714813  663517 certs.go:195] generating shared ca certs ...
	I1019 12:52:50.714830  663517 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:50.714977  663517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:50.715024  663517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:50.715035  663517 certs.go:257] generating profile certs ...
	I1019 12:52:50.715113  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key
	I1019 12:52:50.715153  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b
	I1019 12:52:50.715189  663517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key
	I1019 12:52:50.715286  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:50.715311  663517 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:50.715320  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:50.715340  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:50.715362  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:50.715384  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:50.715443  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:50.716041  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:50.735271  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:50.755214  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:50.777014  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:50.800199  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:52:50.821324  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:50.839279  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:50.856965  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:50.874445  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:50.891496  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:50.908559  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:50.927767  663517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:50.941573  663517 ssh_runner.go:195] Run: openssl version
	I1019 12:52:50.947724  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:50.956196  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.959953  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.960001  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.995897  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:52:51.005114  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:51.013652  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017476  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017521  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.051306  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:51.059843  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:51.068625  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072364  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072434  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.106768  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
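Note: each "openssl x509 -hash" / "ln -fs" pair above implements OpenSSL's hashed-directory lookup: every CA in /etc/ssl/certs becomes reachable through a <subject-hash>.0 symlink. A Go sketch that shells out to openssl for the hash (hashLink is an illustrative name):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink computes the OpenSSL subject hash of a CA certificate and points
	// a <hash>.0 symlink at it, mirroring the ln -fs commands in the log.
	func hashLink(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/355262.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}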
	I1019 12:52:51.115327  663517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:51.119266  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:51.155239  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:51.191302  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:51.231935  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:51.281478  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:51.335604  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
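Note: each "openssl x509 -checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is what would trigger regeneration. The same check in Go (expiresWithin is a hypothetical helper; the path is one of the files tested in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the question `openssl x509 -checkend` answers in the log above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}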
	I1019 12:52:51.389971  663517 kubeadm.go:400] StartCluster: {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:51.390086  663517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:51.390161  663517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:51.427193  663517 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:52:51.427217  663517 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:52:51.427222  663517 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:52:51.427225  663517 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:52:51.427228  663517 cri.go:89] found id: ""
	I1019 12:52:51.427267  663517 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:51.440120  663517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:51Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:51.440220  663517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:51.449733  663517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:51.449753  663517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:51.449805  663517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:51.458169  663517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:51.459058  663517 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-123864" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.459546  663517 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-123864" cluster setting kubeconfig missing "embed-certs-123864" context setting]
	I1019 12:52:51.460311  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.462264  663517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:51.470636  663517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 12:52:51.470666  663517 kubeadm.go:601] duration metric: took 20.906449ms to restartPrimaryControlPlane
	I1019 12:52:51.470676  663517 kubeadm.go:402] duration metric: took 80.715661ms to StartCluster
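
The restart decision above hinges on the `sudo diff -u kubeadm.yaml kubeadm.yaml.new` run: diff exits 0 when the desired config matches the one already on disk, which is why the log concludes the cluster "does not require reconfiguration". A sketch of branching on that exit code (paths as in the log; hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 on identical files, 1 on differences, >1 on error.
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").Run()
	switch e := err.(type) {
	case nil:
		fmt.Println("configs identical: skip control-plane reconfiguration")
	case *exec.ExitError:
		fmt.Println("configs differ (exit", e.ExitCode(), "): reconfigure")
	default:
		fmt.Println("diff failed:", err)
	}
}
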
	I1019 12:52:51.470710  663517 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.470784  663517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.472656  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.472905  663517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:51.473029  663517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:51.473122  663517 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-123864"
	I1019 12:52:51.473142  663517 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-123864"
	W1019 12:52:51.473150  663517 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:51.473154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.473167  663517 addons.go:69] Setting dashboard=true in profile "embed-certs-123864"
	I1019 12:52:51.473186  663517 addons.go:238] Setting addon dashboard=true in "embed-certs-123864"
	I1019 12:52:51.473190  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	W1019 12:52:51.473196  663517 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:51.473194  663517 addons.go:69] Setting default-storageclass=true in profile "embed-certs-123864"
	I1019 12:52:51.473226  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.473225  663517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-123864"
	I1019 12:52:51.473582  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473805  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473960  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.476597  663517 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:51.479247  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:51.500794  663517 addons.go:238] Setting addon default-storageclass=true in "embed-certs-123864"
	W1019 12:52:51.500880  663517 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:51.500970  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.501574  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.502354  663517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:51.503126  663517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:51.503854  663517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.503891  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:51.503970  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.505618  663517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
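
The "scp memory --> …" entries here (and for the dashboard manifests below) stream an in-memory asset over the existing SSH connection instead of copying a local file. A sketch of that transfer pattern, assuming golang.org/x/crypto/ssh and an already-established client (an illustration, not minikube's actual sshutil code):

package sketch

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// pushAsset writes in-memory bytes to a root-owned remote path,
// mirroring the "scp memory --> path (N bytes)" log lines.
func pushAsset(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// sudo tee performs the privileged write; its stdout is discarded.
	return sess.Run("sudo tee " + dst + " >/dev/null")
}
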
	I1019 12:52:47.131514  664256 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-999693" ...
	I1019 12:52:47.131575  664256 cli_runner.go:164] Run: docker start default-k8s-diff-port-999693
	I1019 12:52:47.384629  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.402936  664256 kic.go:430] container "default-k8s-diff-port-999693" state is running.
	I1019 12:52:47.403379  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:47.423463  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.423767  664256 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:47.423874  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:47.444517  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.444842  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:47.444866  664256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:47.445518  664256 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41262->127.0.0.1:33495: read: connection reset by peer
	I1019 12:52:50.583537  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
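
The "connection reset by peer" at 12:52:47 is expected: the container was started a moment earlier and its sshd is not yet listening, so libmachine retries until the `hostname` probe succeeds three seconds later. A retry-dial sketch (hypothetical; 33495 is the forwarded SSH port from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33495" // host port forwarded to the container's :22
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		// A refused or reset dial just means sshd isn't up yet.
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("sshd is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for sshd")
}
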
	
	I1019 12:52:50.583567  664256 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-999693"
	I1019 12:52:50.583650  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.604186  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.604410  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.604444  664256 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-999693 && echo "default-k8s-diff-port-999693" | sudo tee /etc/hostname
	I1019 12:52:50.751627  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.751775  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.773964  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.774248  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.774277  664256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-999693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-999693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-999693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:50.913745  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:50.913786  664256 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:50.913836  664256 ubuntu.go:190] setting up certificates
	I1019 12:52:50.913870  664256 provision.go:84] configureAuth start
	I1019 12:52:50.913952  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:50.934395  664256 provision.go:143] copyHostCerts
	I1019 12:52:50.934470  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:50.934487  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:50.934554  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:50.934664  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:50.934673  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:50.934711  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:50.934808  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:50.934820  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:50.934849  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:50.934971  664256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-999693 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-999693 localhost minikube]
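
The server cert generated above carries SANs for every name the machine might be dialed by (loopback, the container IP, the profile name, localhost, minikube). A sketch of SAN-bearing leaf issuance with crypto/x509, assuming the CA pair and leaf key are already loaded (illustrative values; not minikube's provision code):

package sketch

import (
	"crypto"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a leaf whose SANs match the san=[...] list above.
func issueServerCert(caCert *x509.Certificate, caKey, leafKey crypto.Signer, cn string) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{cn, "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, leafKey.Public(), caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}
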
	I1019 12:52:51.181197  664256 provision.go:177] copyRemoteCerts
	I1019 12:52:51.181259  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:51.181302  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.200908  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.299582  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:51.321298  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 12:52:51.347057  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:51.372503  664256 provision.go:87] duration metric: took 458.610195ms to configureAuth
	I1019 12:52:51.372536  664256 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:51.372758  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.372944  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.397897  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:51.398221  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:51.398253  664256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:51.787740  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:51.787770  664256 machine.go:96] duration metric: took 4.36398321s to provisionDockerMachine
	I1019 12:52:51.787784  664256 start.go:293] postStartSetup for "default-k8s-diff-port-999693" (driver="docker")
	I1019 12:52:51.787799  664256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:51.787891  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:51.787950  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.813780  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.920668  664256 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:51.925324  664256 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:51.925357  664256 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:51.925370  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:51.925448  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:51.925552  664256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:51.925688  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:51.936356  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:51.957175  664256 start.go:296] duration metric: took 169.373131ms for postStartSetup
	I1019 12:52:51.957258  664256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:51.957327  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.980799  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.081065  664256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:52.087117  664256 fix.go:56] duration metric: took 4.974857045s for fixHost
	I1019 12:52:52.087152  664256 start.go:83] releasing machines lock for "default-k8s-diff-port-999693", held for 4.974914543s
	I1019 12:52:52.087228  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:52.111457  664256 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:52.111517  664256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:52.111598  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.111518  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.137014  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.137025  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.314908  664256 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:52.323209  664256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:52.366367  664256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:52.371765  664256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:52.371833  664256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:52.381186  664256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:52.381210  664256 start.go:495] detecting cgroup driver to use...
	I1019 12:52:52.381243  664256 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:52.381290  664256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:52.399404  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:52.414594  664256 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:52.414655  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:52.432231  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:52.447748  664256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:52.544771  664256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:52.640880  664256 docker.go:234] disabling docker service ...
	I1019 12:52:52.640958  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:52.658680  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:52.672412  664256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:52.769106  664256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:52.884868  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:52.906499  664256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:52.933714  664256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:52.933784  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.948702  664256 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:52.948841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.962681  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.976376  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.993092  664256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:53.001841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.017733  664256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.032955  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.050801  664256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:53.067622  664256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:53.083829  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.206267  664256 ssh_runner.go:195] Run: sudo systemctl restart crio
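
Net effect of the sed passes above, reconstructed from the commands themselves (not captured from the host): /etc/crio/crio.conf.d/02-crio.conf should end up containing

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

after which the `systemctl restart crio` here picks the drop-in up.
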
	I1019 12:52:53.349143  664256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:53.349212  664256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:53.355228  664256 start.go:563] Will wait 60s for crictl version
	I1019 12:52:53.355416  664256 ssh_runner.go:195] Run: which crictl
	I1019 12:52:53.361171  664256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:53.398217  664256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:53.398309  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.428293  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.468822  664256 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:51.507351  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:51.507377  663517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:51.507478  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.528518  663517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.528547  663517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:51.528609  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.529319  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.537540  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.560844  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.652064  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:51.659469  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.665965  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:51.665989  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:51.672138  663517 node_ready.go:35] waiting up to 6m0s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:51.685068  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.686285  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:51.686312  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:51.706556  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:51.706583  663517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:51.726874  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:51.726898  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:51.745384  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:51.745410  663517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:51.761707  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:51.761733  663517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:51.779101  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:51.779128  663517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:51.797377  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:51.797405  663517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:51.812263  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:51.812286  663517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:51.829889  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:53.072809  663517 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:53.072851  663517 node_ready.go:38] duration metric: took 1.400666832s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:53.072871  663517 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:53.072920  663517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:53.700121  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040605714s)
	I1019 12:52:53.700176  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.01507119s)
	I1019 12:52:53.700245  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.870328808s)
	I1019 12:52:53.700294  663517 api_server.go:72] duration metric: took 2.22734911s to wait for apiserver process to appear ...
	I1019 12:52:53.700347  663517 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:53.700370  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:53.702124  663517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-123864 addons enable metrics-server
	
	I1019 12:52:53.707464  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:53.707492  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
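
A 500 from /healthz with only the rbac/bootstrap-roles and system-priority-classes post-start hooks failing is the normal transient state seconds after an apiserver restart; minikube simply keeps polling until the endpoint flips to 200. A polling sketch (illustrative; TLS verification is skipped only because this is a local health probe, not production code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := c.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		}
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for healthz")
}
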
	I1019 12:52:53.714665  663517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:53.716036  663517 addons.go:514] duration metric: took 2.243010209s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:53.470131  664256 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:53.492572  664256 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:53.498533  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
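
The one-liner above is a root-safe rewrite of /etc/hosts: grep -v drops any stale tab-terminated host.minikube.internal entry, the fresh "IP<TAB>name" line is appended, and the result is copied back with sudo (a plain > redirect into /etc/hosts would fail for a non-root shell). The same pattern recurs below for control-plane.minikube.internal. A sketch rendering that command for an arbitrary pair (hypothetical helper):

package sketch

// hostsPatchCmd renders the /etc/hosts rewrite from the log: filter out
// any line ending in "\t<name>", append "ip\tname", then sudo cp the
// temp file back over /etc/hosts.
func hostsPatchCmd(ip, name string) string {
	return `{ grep -v $'\t` + name + `$' "/etc/hosts"; echo "` + ip + "\t" + name +
		`"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`
}
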
	I1019 12:52:53.511548  664256 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:53.511704  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:53.511776  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.554672  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.554693  664256 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:53.554740  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.588812  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.588842  664256 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:53.588852  664256 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 12:52:53.588996  664256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-999693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:53.589088  664256 ssh_runner.go:195] Run: crio config
	I1019 12:52:53.643663  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:53.643692  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:53.643715  664256 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:53.643745  664256 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-999693 NodeName:default-k8s-diff-port-999693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:53.643935  664256 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-999693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:53.644016  664256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:53.652520  664256 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:53.652594  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:53.660846  664256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 12:52:53.674227  664256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:53.687240  664256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1019 12:52:53.700930  664256 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:53.705067  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:53.717166  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.801260  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:53.825321  664256 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693 for IP: 192.168.85.2
	I1019 12:52:53.825347  664256 certs.go:195] generating shared ca certs ...
	I1019 12:52:53.825370  664256 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:53.825553  664256 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:53.825597  664256 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:53.825608  664256 certs.go:257] generating profile certs ...
	I1019 12:52:53.825725  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/client.key
	I1019 12:52:53.825803  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key.8ef1e1bb
	I1019 12:52:53.825855  664256 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key
	I1019 12:52:53.826004  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:53.826045  664256 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:53.826057  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:53.826084  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:53.826120  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:53.826159  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:53.826218  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:53.827044  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:53.850305  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:53.874056  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:53.900302  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:53.924868  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 12:52:53.943707  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:52:53.960778  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:53.977601  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 12:52:53.994887  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:54.012296  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:54.038626  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:54.063497  664256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:54.079249  664256 ssh_runner.go:195] Run: openssl version
	I1019 12:52:54.086057  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:54.097143  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102203  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102259  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.158908  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:54.169449  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:54.182754  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188730  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188802  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.244383  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:54.254644  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:54.263550  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267515  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267578  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.304899  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
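
The hash/symlink rounds above follow OpenSSL's trust-store convention: `openssl x509 -hash -noout` prints the certificate's 8-hex-digit subject hash, and OpenSSL looks trusted certs up as /etc/ssl/certs/<hash>.0, hence the b5213941.0, 51391683.0, and 3ec20f2e.0 links. A sketch deriving one link name (shelling out exactly as the log does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	// Idempotent, like the `test -L ... || ln -fs ...` guard above.
	fmt.Println("link needed:", "/etc/ssl/certs/"+hash+".0", "->", pemPath)
}
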
	I1019 12:52:54.313985  664256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:54.317801  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:54.360081  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:54.405761  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:54.464318  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:54.525359  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:54.563734  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 12:52:54.608045  664256 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:54.608169  664256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:54.608231  664256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:54.649470  664256 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:52:54.649495  664256 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:52:54.649501  664256 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:52:54.649506  664256 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:52:54.649511  664256 cri.go:89] found id: ""
	I1019 12:52:54.649557  664256 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:54.665837  664256 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:54Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:54.665908  664256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:54.677684  664256 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:54.677708  664256 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:54.677757  664256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:54.687556  664256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:54.689468  664256 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-999693" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.690566  664256 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-999693" cluster setting kubeconfig missing "default-k8s-diff-port-999693" context setting]
	I1019 12:52:54.691940  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.694639  664256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:54.705918  664256 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:52:54.705949  664256 kubeadm.go:601] duration metric: took 28.235813ms to restartPrimaryControlPlane
	I1019 12:52:54.705960  664256 kubeadm.go:402] duration metric: took 97.926007ms to StartCluster
	I1019 12:52:54.705977  664256 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.706033  664256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.708821  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.709325  664256 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:54.709463  664256 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.709490  664256 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.709502  664256 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:54.709534  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.709617  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:54.709548  664256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:54.709808  664256 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.710141  664256 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.710161  664256 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:54.710191  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.711868  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.712514  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.709821  664256 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.713522  664256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-999693"
	I1019 12:52:54.713860  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.714625  664256 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:54.715871  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:54.746297  664256 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:54.747517  664256 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:54.747552  664256 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:54.749165  664256 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-999693"
	I1019 12:52:54.749177  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1019 12:52:54.749186  664256 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:54.749191  664256 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:54.749216  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.749232  664256 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.749245  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:54.749256  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749306  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749711  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.783580  664256 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.783608  664256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:54.783676  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.787579  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.788172  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.817481  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.916555  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:54.916589  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:54.918652  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:54.921391  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.939730  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:54.939840  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:54.940294  664256 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:54.941172  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.960699  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:54.960783  664256 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:54.976260  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:54.976341  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:54.996375  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:54.996401  664256 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:55.017050  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:55.017079  664256 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:55.033603  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:55.033632  664256 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:55.048007  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:55.048032  664256 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:55.063077  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:55.063102  664256 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:55.078449  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:56.495857  664256 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:56.495897  664256 node_ready.go:38] duration metric: took 1.555549648s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:56.495915  664256 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:56.495982  664256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:57.096998  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.175567368s)
	I1019 12:52:57.097030  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.155826931s)
	I1019 12:52:57.097189  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018704195s)
	I1019 12:52:57.097307  664256 api_server.go:72] duration metric: took 2.387607096s to wait for apiserver process to appear ...
	I1019 12:52:57.097327  664256 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:57.097348  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.100178  664256 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-999693 addons enable metrics-server
	
	I1019 12:52:57.102943  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.102968  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:57.105461  664256 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:54.200764  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.206405  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:54.206480  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:54.701368  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.709189  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:54.710714  663517 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:54.710735  663517 api_server.go:131] duration metric: took 1.010380706s to wait for apiserver health ...
	I1019 12:52:54.710745  663517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:54.721732  663517 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:54.721787  663517 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.721804  663517 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.721814  663517 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.721826  663517 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.721838  663517 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.721893  663517 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.721905  663517 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.721926  663517 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.721934  663517 system_pods.go:74] duration metric: took 11.182501ms to wait for pod list to return data ...
	I1019 12:52:54.721949  663517 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:54.728320  663517 default_sa.go:45] found service account: "default"
	I1019 12:52:54.728404  663517 default_sa.go:55] duration metric: took 6.446433ms for default service account to be created ...
	I1019 12:52:54.728450  663517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:54.742048  663517 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:54.742087  663517 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.742747  663517 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.743381  663517 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.743410  663517 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.743900  663517 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.744078  663517 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.744455  663517 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.744805  663517 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.744821  663517 system_pods.go:126] duration metric: took 16.360253ms to wait for k8s-apps to be running ...
	I1019 12:52:54.745172  663517 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:54.745631  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:54.769658  663517 system_svc.go:56] duration metric: took 24.811398ms WaitForService to wait for kubelet
	I1019 12:52:54.769727  663517 kubeadm.go:586] duration metric: took 3.296760449s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:54.769750  663517 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:54.773633  663517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:54.773745  663517 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:54.773776  663517 node_conditions.go:105] duration metric: took 4.019851ms to run NodePressure ...
	I1019 12:52:54.773995  663517 start.go:241] waiting for startup goroutines ...
	I1019 12:52:54.774026  663517 start.go:246] waiting for cluster config update ...
	I1019 12:52:54.774043  663517 start.go:255] writing updated cluster config ...
	I1019 12:52:54.774837  663517 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:54.781544  663517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:54.790057  663517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:56.796654  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:52:57.109849  664256 addons.go:514] duration metric: took 2.400528693s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:57.598353  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.604765  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.604814  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:58.098137  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:58.103228  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:58.104494  664256 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:58.104523  664256 api_server.go:131] duration metric: took 1.007188483s to wait for apiserver health ...
	I1019 12:52:58.104535  664256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:58.108083  664256 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:58.108110  664256 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.108118  664256 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.108124  664256 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.108130  664256 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.108142  664256 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.108150  664256 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.108159  664256 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.108168  664256 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.108179  664256 system_pods.go:74] duration metric: took 3.637436ms to wait for pod list to return data ...
	I1019 12:52:58.108192  664256 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:58.110578  664256 default_sa.go:45] found service account: "default"
	I1019 12:52:58.110596  664256 default_sa.go:55] duration metric: took 2.39546ms for default service account to be created ...
	I1019 12:52:58.110604  664256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:58.113444  664256 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:58.113473  664256 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.113485  664256 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.113496  664256 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.113516  664256 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.113527  664256 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.113534  664256 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.113539  664256 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.113545  664256 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.113553  664256 system_pods.go:126] duration metric: took 2.943742ms to wait for k8s-apps to be running ...
	I1019 12:52:58.113563  664256 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:58.113613  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:58.128579  664256 system_svc.go:56] duration metric: took 15.004824ms WaitForService to wait for kubelet
	I1019 12:52:58.128609  664256 kubeadm.go:586] duration metric: took 3.418911937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:58.128632  664256 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:58.131784  664256 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:58.131819  664256 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:58.131832  664256 node_conditions.go:105] duration metric: took 3.194851ms to run NodePressure ...
	I1019 12:52:58.131843  664256 start.go:241] waiting for startup goroutines ...
	I1019 12:52:58.131850  664256 start.go:246] waiting for cluster config update ...
	I1019 12:52:58.131862  664256 start.go:255] writing updated cluster config ...
	I1019 12:52:58.132300  664256 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:58.136574  664256 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:58.140912  664256 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:53:00.147567  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
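The healthz sequence above is the standard readiness loop: GET /healthz roughly every half second, treat 500 (unfinished post-start hooks such as rbac/bootstrap-roles) as "not yet", and stop on 200. A minimal sketch of that loop, assuming only the endpoint URL from the log; the TLS setup (skipping verification) is illustrative, not minikube's actual client configuration:

// Poll an apiserver healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for illustration: skip cert verification instead of
			// loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.85.2:8444/healthz" // endpoint from the log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status) // e.g. 500 while hooks finish
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}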
	
	
	==> CRI-O <==
	Oct 19 12:52:26 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:26.295890111Z" level=info msg="Started container" PID=1726 containerID=b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper id=9b6488e0-33d1-4a21-b97e-d8fa282eb3da name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ce503128b1c053d63a2dc142585ed9cf38b2b6920892ae9ea67fad6fc68278b
	Oct 19 12:52:27 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:27.249019147Z" level=info msg="Removing container: b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c" id=d562f866-c216-48f1-a20c-772955422dba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:27 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:27.26055677Z" level=info msg="Removed container b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=d562f866-c216-48f1-a20c-772955422dba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.276789662Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2353e3d3-7b63-4bb0-9bbd-57866ce14963 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.277721855Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5eb1c1d4-329e-4006-90d5-86b31b4983f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.278666589Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1a25572f-a472-43e4-9fc0-e97e46ce0b2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.278955677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283211829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283385638Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e8bd0b09ce8a2a1292e6982a1d9402a90c9f199b83fb96412238ff3cf520766a/merged/etc/passwd: no such file or directory"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283455532Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e8bd0b09ce8a2a1292e6982a1d9402a90c9f199b83fb96412238ff3cf520766a/merged/etc/group: no such file or directory"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283775619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.309915989Z" level=info msg="Created container b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44: kube-system/storage-provisioner/storage-provisioner" id=1a25572f-a472-43e4-9fc0-e97e46ce0b2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.310733427Z" level=info msg="Starting container: b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44" id=86eea788-22fd-4228-8d21-92fd1a55a22c name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.312743155Z" level=info msg="Started container" PID=1740 containerID=b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44 description=kube-system/storage-provisioner/storage-provisioner id=86eea788-22fd-4228-8d21-92fd1a55a22c name=/runtime.v1.RuntimeService/StartContainer sandboxID=4658e3fcca3594b584c6308ecbc62da5028f9fe2979e8db9d54cfc50cfdb93ff
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.165881192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dae67251-d373-470e-a6c7-de56d3eecb1a name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.166866933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67289d87-af30-488e-bd86-4f6cd8f87950 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.167808757Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=32830c0b-0813-4fa8-a9d4-18ebbce16606 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.168033954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.173751882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.174395924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.201953355Z" level=info msg="Created container 29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=32830c0b-0813-4fa8-a9d4-18ebbce16606 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.202609162Z" level=info msg="Starting container: 29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b" id=50e0fcbd-36ea-4a57-9ca6-b6c117447b52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.204320014Z" level=info msg="Started container" PID=1755 containerID=29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper id=50e0fcbd-36ea-4a57-9ca6-b6c117447b52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ce503128b1c053d63a2dc142585ed9cf38b2b6920892ae9ea67fad6fc68278b
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.298389433Z" level=info msg="Removing container: b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021" id=adf2be1a-dc7b-485a-9133-0051d73fce00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.307844523Z" level=info msg="Removed container b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=adf2be1a-dc7b-485a-9133-0051d73fce00 name=/runtime.v1.RuntimeService/RemoveContainer
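The CRI-O journal lines above carry logfmt-style fields (`time=... level=... msg="..."`). A minimal sketch, assuming nothing beyond the line shape shown, for pulling out the level and message (not a full logfmt parser; escaped quotes are left unprocessed):

package main

import (
	"fmt"
	"regexp"
)

// Matches `level=<word> msg="<possibly escaped text>"`.
var fieldRe = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-10-19T12:52:45.298389433Z" level=info msg="Removing container: b122f736733a..."`
	if m := fieldRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("level=%s msg=%s\n", m[1], m[2])
	}
}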
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	29b71e817f4ea       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   2ce503128b1c0       dashboard-metrics-scraper-5f989dc9cf-kx2tb       kubernetes-dashboard
	b22cedaa72f07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago       Running             storage-provisioner         1                   4658e3fcca359       storage-provisioner                              kube-system
	141891d9bcecd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago       Running             kubernetes-dashboard        0                   9eef0afbabf70       kubernetes-dashboard-8694d4445c-4xrjh            kubernetes-dashboard
	831f176d66e63       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago       Running             coredns                     0                   46899d6103082       coredns-5dd5756b68-44mqv                         kube-system
	1644ce12959f7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   1278fa0581229       busybox                                          default
	bca9cb8e7e1a4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 0                   f25a220960201       kindnet-2h26b                                    kube-system
	e9c3dda964119       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago       Running             kube-proxy                  0                   d48b279492427       kube-proxy-lhths                                 kube-system
	a9a54186737cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         0                   4658e3fcca359       storage-provisioner                              kube-system
	ba25f6a999b0c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   5e6b6fc78f636       kube-apiserver-old-k8s-version-577062            kube-system
	fbf4c9d76e1db       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   e3ff6ccb73e03       kube-controller-manager-old-k8s-version-577062   kube-system
	8577c744298fa       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   648f572919b9d       kube-scheduler-old-k8s-version-577062            kube-system
	2c9fe6c9b1b32       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   b2633e090834e       etcd-old-k8s-version-577062                      kube-system
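This table is `crictl ps -a`-style output. Earlier in this log minikube ran `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` over SSH to collect just the container IDs; a minimal local sketch that shells out to the same command (root privileges and a configured CRI endpoint are assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same flags as the command logged above; --quiet prints only IDs.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}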
	
	
	==> coredns [831f176d66e63a51f4bc180ce401d4ecda5e783f443e4ffd91216fd1999c8eef] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52856 - 28719 "HINFO IN 8134314610029088256.8191675844325686558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085771502s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
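The `[INFO] 127.0.0.1:52856 ... "HINFO IN ..."` line above is CoreDNS's own startup self-query. A comparable probe from Go, sending a lookup straight at a DNS server; the 127.0.0.1:53 address is an assumption for illustration (on a cluster you would target the kube-dns service IP):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Force all queries to one server instead of /etc/resolv.conf.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "127.0.0.1:53")
		},
	}
	ips, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", ips)
}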
	
	
	==> describe nodes <==
	Name:               old-k8s-version-577062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-577062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-577062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_50_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:50:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-577062
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-577062
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bfa1b0a1-e61a-4552-82c8-d6cc29922f2a
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-44mqv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-old-k8s-version-577062                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-2h26b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-577062             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-577062    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-lhths                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-577062             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kx2tb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4xrjh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node old-k8s-version-577062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node old-k8s-version-577062 event: Registered Node old-k8s-version-577062 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-577062 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-577062 event: Registered Node old-k8s-version-577062 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [2c9fe6c9b1b32926f91a1bde357e191e5e1e3b8139fa61a8202db438bcecf6d3] <==
	{"level":"info","ts":"2025-10-19T12:52:03.725582Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T12:52:03.725594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T12:52:03.725875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-19T12:52:03.725952Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-19T12:52:03.726056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:52:03.72609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:52:03.728072Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T12:52:03.72835Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T12:52:03.728416Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T12:52:03.729618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-19T12:52:03.729684Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-19T12:52:05.016492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.018407Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-577062 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T12:52:05.018447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:52:05.018416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:52:05.018637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T12:52:05.018669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T12:52:05.01949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-19T12:52:05.019687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:53:04 up  2:35,  0 user,  load average: 4.67, 4.81, 3.10
	Linux old-k8s-version-577062 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bca9cb8e7e1a4789fce59ad4a5788c1e7058d9f9e7ec1057f342040b015717bc] <==
	I1019 12:52:07.727238       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:07.727524       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 12:52:07.727697       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:07.727717       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:07.727742       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:07.926955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:07.926986       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:07.927001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:08.124622       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:08.227779       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:08.227800       1 metrics.go:72] Registering metrics
	I1019 12:52:08.227860       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:17.927505       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:17.927589       1 main.go:301] handling current node
	I1019 12:52:27.927229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:27.927257       1 main.go:301] handling current node
	I1019 12:52:37.927588       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:37.927643       1 main.go:301] handling current node
	I1019 12:52:47.927651       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:47.927692       1 main.go:301] handling current node
	I1019 12:52:57.934500       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:57.934545       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ba25f6a999b0c5ae02f451d523de313de12a4d3d20296a8becbbee6fa1a54b92] <==
	I1019 12:52:06.323706       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 12:52:06.331726       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 12:52:06.339986       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:06.376101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 12:52:06.376141       1 aggregator.go:166] initial CRD sync complete...
	I1019 12:52:06.376156       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 12:52:06.376165       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:06.376176       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:52:06.421879       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 12:52:06.422004       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 12:52:06.421894       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 12:52:06.422256       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 12:52:06.423872       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 12:52:07.227784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:07.340024       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 12:52:07.370988       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 12:52:07.389012       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:07.397441       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:07.404892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 12:52:07.439392       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.75.19"}
	I1019 12:52:07.456952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.12.204"}
	I1019 12:52:18.734320       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:52:18.806743       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 12:52:18.806744       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 12:52:18.983184       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fbf4c9d76e1dbee5411f82439799eddfa94579d729009e817ab32efa62aa037b] <==
	I1019 12:52:18.987514       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1019 12:52:18.987618       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 12:52:18.988955       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1019 12:52:18.997992       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4xrjh"
	I1019 12:52:18.998855       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	I1019 12:52:19.006704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.315897ms"
	I1019 12:52:19.007018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.794631ms"
	I1019 12:52:19.012596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.769031ms"
	I1019 12:52:19.012682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.512µs"
	I1019 12:52:19.014691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.130509ms"
	I1019 12:52:19.025893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.766µs"
	I1019 12:52:19.026264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.490846ms"
	I1019 12:52:19.026344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.245µs"
	I1019 12:52:19.303886       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:52:19.331958       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:52:19.331994       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 12:52:23.267968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.552936ms"
	I1019 12:52:23.268592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="101.374µs"
	I1019 12:52:26.254211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.751µs"
	I1019 12:52:27.259886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.096µs"
	I1019 12:52:28.260403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.37µs"
	I1019 12:52:45.308669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.894µs"
	I1019 12:52:47.575941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.251184ms"
	I1019 12:52:47.576261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.431µs"
	I1019 12:52:49.317385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.947µs"
	
	
	==> kube-proxy [e9c3dda964119fe6efea193da287473cefe468088e2bca9f9cf19321e2a8bfeb] <==
	I1019 12:52:07.580491       1 server_others.go:69] "Using iptables proxy"
	I1019 12:52:07.588813       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1019 12:52:07.605913       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:07.608334       1 server_others.go:152] "Using iptables Proxier"
	I1019 12:52:07.608361       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 12:52:07.608366       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 12:52:07.608393       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 12:52:07.608737       1 server.go:846] "Version info" version="v1.28.0"
	I1019 12:52:07.608755       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:07.609766       1 config.go:315] "Starting node config controller"
	I1019 12:52:07.609803       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 12:52:07.609816       1 config.go:188] "Starting service config controller"
	I1019 12:52:07.609849       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 12:52:07.609361       1 config.go:97] "Starting endpoint slice config controller"
	I1019 12:52:07.610040       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 12:52:07.709977       1 shared_informer.go:318] Caches are synced for node config
	I1019 12:52:07.710075       1 shared_informer.go:318] Caches are synced for service config
	I1019 12:52:07.710399       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8577c744298fa841bb6cdfc8e4e7b5ca9854b6075ef4d4ee96ca794f243de677] <==
	W1019 12:52:06.327806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.327831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.329043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.329072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.329443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1019 12:52:06.330926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.330058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.330983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.330273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1019 12:52:06.331003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W1019 12:52:06.330444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331022       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.330580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.331352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.332785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.332974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.332919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.336658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1019 12:52:06.336690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I1019 12:52:06.418647       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059784     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/319a68f4-f2f5-4163-af82-7420a9bd1a41-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4xrjh\" (UID: \"319a68f4-f2f5-4163-af82-7420a9bd1a41\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4xrjh"
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059818     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5hqb\" (UniqueName: \"kubernetes.io/projected/a2b1a6c1-1690-476d-972a-fac12a8b3d1f-kube-api-access-l5hqb\") pod \"dashboard-metrics-scraper-5f989dc9cf-kx2tb\" (UID: \"a2b1a6c1-1690-476d-972a-fac12a8b3d1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059950     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2b1a6c1-1690-476d-972a-fac12a8b3d1f-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-kx2tb\" (UID: \"a2b1a6c1-1690-476d-972a-fac12a8b3d1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	Oct 19 12:52:23 old-k8s-version-577062 kubelet[715]: I1019 12:52:23.253438     715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4xrjh" podStartSLOduration=1.512541827 podCreationTimestamp="2025-10-19 12:52:18 +0000 UTC" firstStartedPulling="2025-10-19 12:52:19.328800685 +0000 UTC m=+16.257529503" lastFinishedPulling="2025-10-19 12:52:23.069597706 +0000 UTC m=+19.998326536" observedRunningTime="2025-10-19 12:52:23.253252401 +0000 UTC m=+20.181981240" watchObservedRunningTime="2025-10-19 12:52:23.25333886 +0000 UTC m=+20.182067697"
	Oct 19 12:52:26 old-k8s-version-577062 kubelet[715]: I1019 12:52:26.243156     715 scope.go:117] "RemoveContainer" containerID="b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: I1019 12:52:27.247609     715 scope.go:117] "RemoveContainer" containerID="b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: I1019 12:52:27.247803     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: E1019 12:52:27.248186     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:28 old-k8s-version-577062 kubelet[715]: I1019 12:52:28.251105     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:28 old-k8s-version-577062 kubelet[715]: E1019 12:52:28.251384     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:29 old-k8s-version-577062 kubelet[715]: I1019 12:52:29.305992     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:29 old-k8s-version-577062 kubelet[715]: E1019 12:52:29.306384     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:38 old-k8s-version-577062 kubelet[715]: I1019 12:52:38.276273     715 scope.go:117] "RemoveContainer" containerID="a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.165257     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.297171     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.297511     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: E1019 12:52:45.297863     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:49 old-k8s-version-577062 kubelet[715]: I1019 12:52:49.306242     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:52:49 old-k8s-version-577062 kubelet[715]: E1019 12:52:49.306639     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:53:00 old-k8s-version-577062 kubelet[715]: I1019 12:53:00.165105     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:53:00 old-k8s-version-577062 kubelet[715]: E1019 12:53:00.165534     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: kubelet.service: Consumed 1.612s CPU time.
	
	
	==> kubernetes-dashboard [141891d9bcecd7b8f29e6a840f8c01c263be938405ca6b55629648a298625543] <==
	2025/10/19 12:52:23 Using namespace: kubernetes-dashboard
	2025/10/19 12:52:23 Using in-cluster config to connect to apiserver
	2025/10/19 12:52:23 Using secret token for csrf signing
	2025/10/19 12:52:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:52:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:52:23 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 12:52:23 Generating JWE encryption key
	2025/10/19 12:52:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:52:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:52:23 Initializing JWE encryption key from synchronized object
	2025/10/19 12:52:23 Creating in-cluster Sidecar client
	2025/10/19 12:52:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Serving insecurely on HTTP port: 9090
	2025/10/19 12:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Starting overwatch
	
	
	==> storage-provisioner [a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f] <==
	I1019 12:52:07.544833       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:52:37.549447       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44] <==
	I1019 12:52:38.324150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:38.331599       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:38.331634       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 12:52:55.729527       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:52:55.729596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8829309e-ce84-4b37-8b7e-53ec540533f6", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b became leader
	I1019 12:52:55.729678       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b!
	I1019 12:52:55.830929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b!
	

                                                
                                                
-- /stdout --
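The dump above is minikube's aggregated post-mortem: the node description (conditions, capacity, allocated resources, events), host dmesg, and per-container component logs. A minimal sketch of collecting the same pieces by hand, assuming the profile name doubles as the kubectl context (minikube's default wiring):

	minikube logs -p old-k8s-version-577062 --file=postmortem.txt
	kubectl --context old-k8s-version-577062 describe node old-k8s-version-577062
	kubectl --context old-k8s-version-577062 -n kubernetes-dashboard logs deploy/kubernetes-dashboard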
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-577062 -n old-k8s-version-577062: exit status 2 (374.891979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
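minikube status renders through a Go template, which is how the helper pulls a single field with {{.APIServer}}; the nonzero exit encodes a profile that is not fully up, and the harness notes it "may be ok" because this test has just paused the cluster. A hedged sketch of checking several fields in one call (field names as used by minikube status):

	minikube status -p old-k8s-version-577062 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'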
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-577062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
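The query above uses a field selector to list every pod whose phase is not Running, across all namespaces; an empty result means every pod reports phase Running. A sketch of the same check printed one namespace/name/phase triple per line (standard kubectl jsonpath range syntax):

	kubectl --context old-k8s-version-577062 get pods -A --field-selector=status.phase!=Running -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name} {.status.phase}{"\n"}{end}'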
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
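The second post-mortem below snapshots host-level state: the proxy environment and a raw docker inspect of the node container. A hedged sketch of pulling just the useful fields out of that JSON with Go templates (the index call is needed because the network name contains hyphens, which dotted template access cannot parse):

	docker inspect old-k8s-version-577062 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "old-k8s-version-577062").IPAddress}}'
	docker port old-k8s-version-577062 8443/tcp

docker port resolves the published host port for the apiserver endpoint, matching the 8443/tcp entry in the Ports map of the inspect output below.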
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshot: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-577062
helpers_test.go:243: (dbg) docker inspect old-k8s-version-577062:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	        "Created": "2025-10-19T12:50:42.983195608Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 655637,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:51:57.060341737Z",
	            "FinishedAt": "2025-10-19T12:51:56.143621748Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hostname",
	        "HostsPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/hosts",
	        "LogPath": "/var/lib/docker/containers/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da/368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da-json.log",
	        "Name": "/old-k8s-version-577062",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-577062:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-577062",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "368928979a1743039f83bb6e976b19a4ebd4f4437727ffab368c86c1dc88a5da",
	                "LowerDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad482f3956284773e120f9065cdd7f07802861d1771e61bb563b338ed1005a40/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-577062",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-577062/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-577062",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-577062",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7ba102986ccec709fe88a6b60c1d89d7d3e8d3623ff784198d3d0477dd33e85c",
	            "SandboxKey": "/var/run/docker/netns/7ba102986cce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-577062": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:cf:7e:e7:d9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "502db93731f3c65b158cfaea0389f311a4314988a15a727b3ce6c492ca19cd92",
	                    "EndpointID": "36f1415fd6b4505712ea9dedcb743451dc54335b49d3ae816a9dcd0a88c25554",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-577062",
	                        "368928979a17"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
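
The inspect dump above is where the test harness reads the container's published ports (22/tcp forwarded to host port 33480, 8443/tcp to 33483, and so on). Below is a minimal Go sketch of that lookup, shelling out to the Docker CLI the same way the cli_runner lines further down do; hostPort is a hypothetical helper for illustration, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port bound to a container port, e.g.
    // hostPort("old-k8s-version-577062", "22/tcp") -> "33480".
    func hostPort(container, port string) (string, error) {
        // Renders to: {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
        format := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("old-k8s-version-577062", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("22/tcp published on host port", p)
    }
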
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062: exit status 2 (398.909634ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25
E1019 12:53:05.452339  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:05.459186  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:05.470627  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:05.492744  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-577062 logs -n 25: (1.466583738s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:46.925201  664256 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:46.925511  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925521  664256 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:46.925526  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925724  664256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:46.926177  664256 out.go:368] Setting JSON to false
	I1019 12:52:46.927476  664256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9315,"bootTime":1760869052,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:46.927572  664256 start.go:141] virtualization: kvm guest
	I1019 12:52:46.929196  664256 out.go:179] * [default-k8s-diff-port-999693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:46.930756  664256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:46.930801  664256 notify.go:220] Checking for updates...
	I1019 12:52:46.932758  664256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:46.934048  664256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:46.935192  664256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:46.936498  664256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:46.937762  664256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:46.939394  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:46.939848  664256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:46.963683  664256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:46.963772  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.023378  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.013329476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.023535  664256 docker.go:318] overlay module found
	I1019 12:52:47.025269  664256 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:47.026568  664256 start.go:305] selected driver: docker
	I1019 12:52:47.026597  664256 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.026732  664256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:47.027471  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.086363  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.076802932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.086679  664256 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:47.086707  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:47.086755  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:47.086787  664256 start.go:349] cluster config:
	{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.088476  664256 out.go:179] * Starting "default-k8s-diff-port-999693" primary control-plane node in "default-k8s-diff-port-999693" cluster
	I1019 12:52:47.089564  664256 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:47.090727  664256 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:47.091742  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:47.091773  664256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:52:47.091781  664256 cache.go:58] Caching tarball of preloaded images
	I1019 12:52:47.091796  664256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:47.091859  664256 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:52:47.091870  664256 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:52:47.091959  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.112105  664256 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:47.112128  664256 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:47.112142  664256 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:52:47.112172  664256 start.go:360] acquireMachinesLock for default-k8s-diff-port-999693: {Name:mke26e7439408c8adecea1bbb9344a31dd77b3c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:47.112226  664256 start.go:364] duration metric: took 36.455µs to acquireMachinesLock for "default-k8s-diff-port-999693"
	I1019 12:52:47.112245  664256 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:47.112252  664256 fix.go:54] fixHost starting: 
	I1019 12:52:47.112490  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.129772  664256 fix.go:112] recreateIfNeeded on default-k8s-diff-port-999693: state=Stopped err=<nil>
	W1019 12:52:47.129802  664256 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:44.281015  663517 out.go:252] * Restarting existing docker container for "embed-certs-123864" ...
	I1019 12:52:44.281101  663517 cli_runner.go:164] Run: docker start embed-certs-123864
	I1019 12:52:44.526509  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:44.546310  663517 kic.go:430] container "embed-certs-123864" state is running.
	I1019 12:52:44.546720  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:44.565833  663517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:52:44.566069  663517 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:44.566147  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:44.585705  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:44.585938  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:44.585949  663517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:44.586499  663517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58104->127.0.0.1:33490: read: connection reset by peer
	I1019 12:52:47.734652  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.734694  663517 ubuntu.go:182] provisioning hostname "embed-certs-123864"
	I1019 12:52:47.734763  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.754305  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.754574  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.754594  663517 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-123864 && echo "embed-certs-123864" | sudo tee /etc/hostname
	I1019 12:52:47.900303  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.900379  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.918114  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.918334  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.918355  663517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-123864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-123864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-123864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:48.051196  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
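
The hostname provisioning above runs three SSH rounds over the forwarded port 33490: read the current hostname, set it, then make /etc/hosts agree (mapping 127.0.1.1 to embed-certs-123864). The following is roughly what the "native SSH client" lines amount to, sketched with golang.org/x/crypto/ssh; the key path, user, and port are taken from the log, everything else is illustrative rather than minikube's actual code:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable only for a throwaway test node
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33490", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out)) // expected: embed-certs-123864
    }
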
	I1019 12:52:48.051226  663517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:48.051276  663517 ubuntu.go:190] setting up certificates
	I1019 12:52:48.051294  663517 provision.go:84] configureAuth start
	I1019 12:52:48.051351  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:48.069277  663517 provision.go:143] copyHostCerts
	I1019 12:52:48.069333  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:48.069349  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:48.069433  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:48.069546  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:48.069557  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:48.069604  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:48.069660  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:48.069667  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:48.069692  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:48.069741  663517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.embed-certs-123864 san=[127.0.0.1 192.168.76.2 embed-certs-123864 localhost minikube]
	I1019 12:52:48.585780  663517 provision.go:177] copyRemoteCerts
	I1019 12:52:48.585838  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:48.585871  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.604279  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:48.702233  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:48.720721  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:48.738512  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:48.755942  663517 provision.go:87] duration metric: took 704.627825ms to configureAuth
	I1019 12:52:48.755977  663517 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:48.756154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:48.756278  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.775133  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:48.775433  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:48.775459  663517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:49.061359  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:49.061389  663517 machine.go:96] duration metric: took 4.495303282s to provisionDockerMachine
	I1019 12:52:49.061401  663517 start.go:293] postStartSetup for "embed-certs-123864" (driver="docker")
	I1019 12:52:49.061414  663517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:49.061511  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:49.061564  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.787829  657553 pod_ready.go:94] pod "coredns-66bc5c9577-pgxlp" is "Ready"
	I1019 12:52:47.787855  657553 pod_ready.go:86] duration metric: took 31.504899877s for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.789711  657553 pod_ready.go:83] waiting for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.793406  657553 pod_ready.go:94] pod "etcd-no-preload-561408" is "Ready"
	I1019 12:52:47.793446  657553 pod_ready.go:86] duration metric: took 3.709623ms for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.795182  657553 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.798678  657553 pod_ready.go:94] pod "kube-apiserver-no-preload-561408" is "Ready"
	I1019 12:52:47.798700  657553 pod_ready.go:86] duration metric: took 3.496714ms for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.800596  657553 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.986813  657553 pod_ready.go:94] pod "kube-controller-manager-no-preload-561408" is "Ready"
	I1019 12:52:47.986842  657553 pod_ready.go:86] duration metric: took 186.220802ms for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.186670  657553 pod_ready.go:83] waiting for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.586865  657553 pod_ready.go:94] pod "kube-proxy-lppwp" is "Ready"
	I1019 12:52:48.586892  657553 pod_ready.go:86] duration metric: took 400.184165ms for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.785758  657553 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186913  657553 pod_ready.go:94] pod "kube-scheduler-no-preload-561408" is "Ready"
	I1019 12:52:49.186953  657553 pod_ready.go:86] duration metric: took 401.160394ms for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186968  657553 pod_ready.go:40] duration metric: took 32.907293647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.233509  657553 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:49.235163  657553 out.go:179] * Done! kubectl is now configured to use "no-preload-561408" cluster and "default" namespace by default
	W1019 12:52:47.528927  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	I1019 12:52:48.027407  655442 pod_ready.go:94] pod "coredns-5dd5756b68-44mqv" is "Ready"
	I1019 12:52:48.027445  655442 pod_ready.go:86] duration metric: took 40.505181601s for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.030160  655442 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.034042  655442 pod_ready.go:94] pod "etcd-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.034071  655442 pod_ready.go:86] duration metric: took 3.888307ms for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.036741  655442 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.040245  655442 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.040263  655442 pod_ready.go:86] duration metric: took 3.503128ms for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.042393  655442 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.225329  655442 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.225354  655442 pod_ready.go:86] duration metric: took 182.944102ms for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.426194  655442 pod_ready.go:83] waiting for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.826171  655442 pod_ready.go:94] pod "kube-proxy-lhths" is "Ready"
	I1019 12:52:48.826194  655442 pod_ready.go:86] duration metric: took 399.973598ms for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.025864  655442 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425023  655442 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-577062" is "Ready"
	I1019 12:52:49.425051  655442 pod_ready.go:86] duration metric: took 399.16124ms for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425063  655442 pod_ready.go:40] duration metric: took 41.909017776s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.471302  655442 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 12:52:49.473153  655442 out.go:203] 
	W1019 12:52:49.474513  655442 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 12:52:49.475817  655442 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:52:49.477137  655442 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-577062" cluster and "default" namespace by default
	I1019 12:52:49.080598  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.176835  663517 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:49.180594  663517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:49.180624  663517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:49.180639  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:49.180704  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:49.180802  663517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:49.180915  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:49.188874  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:49.207471  663517 start.go:296] duration metric: took 146.052119ms for postStartSetup
	I1019 12:52:49.207569  663517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:49.207618  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.227005  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.322539  663517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:49.327981  663517 fix.go:56] duration metric: took 5.066251838s for fixHost
	I1019 12:52:49.328013  663517 start.go:83] releasing machines lock for "embed-certs-123864", held for 5.066315254s
	I1019 12:52:49.328080  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:49.348437  663517 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:49.348488  663517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:49.348506  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.348561  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.368071  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.368417  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.525163  663517 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:49.534330  663517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:49.578043  663517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:49.583920  663517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:49.583993  663517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:49.593384  663517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:49.593406  663517 start.go:495] detecting cgroup driver to use...
	I1019 12:52:49.593463  663517 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:49.593523  663517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:49.612003  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:49.626574  663517 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:49.626639  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:49.641058  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:49.653880  663517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:49.736282  663517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:49.834377  663517 docker.go:234] disabling docker service ...
	I1019 12:52:49.834478  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:49.850898  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:49.864746  663517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:49.939108  663517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:50.014260  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:50.026706  663517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:50.040656  663517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:50.040725  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.049794  663517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:50.049857  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.058814  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.067348  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.075837  663517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:50.083843  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.092439  663517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.100689  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.109083  663517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:50.116037  663517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:50.123017  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.196214  663517 ssh_runner.go:195] Run: sudo systemctl restart crio
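
Pieced together from the sed edits above, the CRI-O drop-in left behind in /etc/crio/crio.conf.d/02-crio.conf should end up roughly like the following by the time the daemon-reload and restart pick it up (a reconstruction from the log, not captured from the host; the section headers assume CRI-O's stock config layout):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
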
	I1019 12:52:50.304544  663517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:50.304601  663517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:50.308678  663517 start.go:563] Will wait 60s for crictl version
	I1019 12:52:50.308736  663517 ssh_runner.go:195] Run: which crictl
	I1019 12:52:50.312585  663517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:50.336989  663517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:50.337082  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.365185  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.395636  663517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:50.396988  663517 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:50.414563  663517 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:50.418760  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:50.429343  663517 kubeadm.go:883] updating cluster {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:50.429499  663517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:50.429554  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.463514  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.463537  663517 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:50.463585  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.489852  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.489884  663517 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:50.489897  663517 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:50.490024  663517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-123864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
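	Note: the empty ExecStart= line is the usual systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the next line substitutes the minikube-specific command. The fragment is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; the merged unit can be inspected on the node with standard systemctl commands:
	
	    $ systemctl cat kubelet              # base unit plus drop-ins
	    $ systemctl show kubelet -p ExecStart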
	I1019 12:52:50.490091  663517 ssh_runner.go:195] Run: crio config
	I1019 12:52:50.540351  663517 cni.go:84] Creating CNI manager for ""
	I1019 12:52:50.540379  663517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:50.540402  663517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:50.540455  663517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-123864 NodeName:embed-certs-123864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:50.540626  663517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-123864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:50.540708  663517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:50.548975  663517 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:50.549037  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:50.556535  663517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 12:52:50.569078  663517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:50.582078  663517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
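	Note: the kubeadm.yaml.new just copied bundles the three config documents rendered above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As an independent sanity check, kubeadm can validate such a file directly (the subcommand ships with kubeadm v1.26+):
	
	    $ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new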
	I1019 12:52:50.594598  663517 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:50.598683  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:50.609655  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.691984  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:50.714791  663517 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864 for IP: 192.168.76.2
	I1019 12:52:50.714813  663517 certs.go:195] generating shared ca certs ...
	I1019 12:52:50.714830  663517 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:50.714977  663517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:50.715024  663517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:50.715035  663517 certs.go:257] generating profile certs ...
	I1019 12:52:50.715113  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key
	I1019 12:52:50.715153  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b
	I1019 12:52:50.715189  663517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key
	I1019 12:52:50.715286  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:50.715311  663517 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:50.715320  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:50.715340  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:50.715362  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:50.715384  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:50.715443  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:50.716041  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:50.735271  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:50.755214  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:50.777014  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:50.800199  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:52:50.821324  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:50.839279  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:50.856965  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:50.874445  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:50.891496  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:50.908559  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:50.927767  663517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:50.941573  663517 ssh_runner.go:195] Run: openssl version
	I1019 12:52:50.947724  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:50.956196  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.959953  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.960001  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.995897  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:52:51.005114  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:51.013652  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017476  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017521  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.051306  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:51.059843  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:51.068625  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072364  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072434  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.106768  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
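	Note: the <hash>.0 symlink names follow OpenSSL's hashed-directory lookup convention: `openssl x509 -hash -noout` prints the subject-name hash, and that hash plus a .0 suffix is exactly the link name OpenSSL expects under /etc/ssl/certs. For the minikubeCA cert above, for example:
	
	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ readlink /etc/ssl/certs/b5213941.0
	    /etc/ssl/certs/minikubeCA.pem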
	I1019 12:52:51.115327  663517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:51.119266  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:51.155239  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:51.191302  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:51.231935  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:51.281478  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:51.335604  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
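	Note: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a failure here is what forces minikube to regenerate control-plane certs instead of reusing them. A standalone equivalent of one of the checks above:
	
	    $ openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	        && echo "valid for at least 24h" || echo "expiring within 24h"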
	I1019 12:52:51.389971  663517 kubeadm.go:400] StartCluster: {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:51.390086  663517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:51.390161  663517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:51.427193  663517 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:52:51.427217  663517 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:52:51.427222  663517 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:52:51.427225  663517 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:52:51.427228  663517 cri.go:89] found id: ""
	I1019 12:52:51.427267  663517 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:51.440120  663517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:51Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:51.440220  663517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:51.449733  663517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:51.449753  663517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:51.449805  663517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:51.458169  663517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:51.459058  663517 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-123864" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.459546  663517 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-123864" cluster setting kubeconfig missing "embed-certs-123864" context setting]
	I1019 12:52:51.460311  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.462264  663517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:51.470636  663517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 12:52:51.470666  663517 kubeadm.go:601] duration metric: took 20.906449ms to restartPrimaryControlPlane
	I1019 12:52:51.470676  663517 kubeadm.go:402] duration metric: took 80.715661ms to StartCluster
	I1019 12:52:51.470710  663517 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.470784  663517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.472656  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.472905  663517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:51.473029  663517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:51.473122  663517 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-123864"
	I1019 12:52:51.473142  663517 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-123864"
	W1019 12:52:51.473150  663517 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:51.473154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.473167  663517 addons.go:69] Setting dashboard=true in profile "embed-certs-123864"
	I1019 12:52:51.473186  663517 addons.go:238] Setting addon dashboard=true in "embed-certs-123864"
	I1019 12:52:51.473190  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	W1019 12:52:51.473196  663517 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:51.473194  663517 addons.go:69] Setting default-storageclass=true in profile "embed-certs-123864"
	I1019 12:52:51.473226  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.473225  663517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-123864"
	I1019 12:52:51.473582  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473805  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473960  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.476597  663517 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:51.479247  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:51.500794  663517 addons.go:238] Setting addon default-storageclass=true in "embed-certs-123864"
	W1019 12:52:51.500880  663517 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:51.500970  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.501574  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.502354  663517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:51.503126  663517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:51.503854  663517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.503891  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:51.503970  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.505618  663517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:47.131514  664256 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-999693" ...
	I1019 12:52:47.131575  664256 cli_runner.go:164] Run: docker start default-k8s-diff-port-999693
	I1019 12:52:47.384629  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.402936  664256 kic.go:430] container "default-k8s-diff-port-999693" state is running.
	I1019 12:52:47.403379  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:47.423463  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.423767  664256 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:47.423874  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:47.444517  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.444842  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:47.444866  664256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:47.445518  664256 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41262->127.0.0.1:33495: read: connection reset by peer
	I1019 12:52:50.583537  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.583567  664256 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-999693"
	I1019 12:52:50.583650  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.604186  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.604410  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.604444  664256 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-999693 && echo "default-k8s-diff-port-999693" | sudo tee /etc/hostname
	I1019 12:52:50.751627  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.751775  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.773964  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.774248  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.774277  664256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-999693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-999693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-999693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:50.913745  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:50.913786  664256 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:50.913836  664256 ubuntu.go:190] setting up certificates
	I1019 12:52:50.913870  664256 provision.go:84] configureAuth start
	I1019 12:52:50.913952  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:50.934395  664256 provision.go:143] copyHostCerts
	I1019 12:52:50.934470  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:50.934487  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:50.934554  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:50.934664  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:50.934673  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:50.934711  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:50.934808  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:50.934820  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:50.934849  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:50.934971  664256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-999693 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-999693 localhost minikube]
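	Note: the server cert is minted with the SAN list shown (loopback, the container IP, the profile name, localhost, minikube) so the machine provisioning layer can reach the node under any of those names. With OpenSSL 1.1.1+ the SANs can be read back directly:
	
	    $ openssl x509 -noout -ext subjectAltName \
	        -in /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem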
	I1019 12:52:51.181197  664256 provision.go:177] copyRemoteCerts
	I1019 12:52:51.181259  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:51.181302  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.200908  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.299582  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:51.321298  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 12:52:51.347057  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:51.372503  664256 provision.go:87] duration metric: took 458.610195ms to configureAuth
	I1019 12:52:51.372536  664256 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:51.372758  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.372944  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.397897  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:51.398221  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:51.398253  664256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:51.787740  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:51.787770  664256 machine.go:96] duration metric: took 4.36398321s to provisionDockerMachine
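	Note: the CRIO_MINIKUBE_OPTIONS written to /etc/sysconfig/crio.minikube above only takes effect if the crio unit in the kicbase image sources that file as an environment file; that wiring is an assumption here, not captured in this log, and can be checked on the node:
	
	    $ systemctl cat crio | grep -i EnvironmentFile
	    # expected to reference /etc/sysconfig/crio.minikube (assumption, verify on the node)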
	I1019 12:52:51.787784  664256 start.go:293] postStartSetup for "default-k8s-diff-port-999693" (driver="docker")
	I1019 12:52:51.787799  664256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:51.787891  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:51.787950  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.813780  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.920668  664256 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:51.925324  664256 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:51.925357  664256 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:51.925370  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:51.925448  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:51.925552  664256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:51.925688  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:51.936356  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:51.957175  664256 start.go:296] duration metric: took 169.373131ms for postStartSetup
	I1019 12:52:51.957258  664256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:51.957327  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.980799  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.081065  664256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:52.087117  664256 fix.go:56] duration metric: took 4.974857045s for fixHost
	I1019 12:52:52.087152  664256 start.go:83] releasing machines lock for "default-k8s-diff-port-999693", held for 4.974914543s
	I1019 12:52:52.087228  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:52.111457  664256 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:52.111517  664256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:52.111598  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.111518  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.137014  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.137025  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.314908  664256 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:52.323209  664256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:52.366367  664256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:52.371765  664256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:52.371833  664256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:52.381186  664256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
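	Note: this step renames any preinstalled *bridge*/*podman* CNI configs to *.mk_disabled so that kindnet (the CNI recommended earlier for the docker driver + crio runtime) is the only active config. Nothing matched in this run; when something does, the directory looks like:
	
	    $ ls /etc/cni/net.d/
	    87-podman-bridge.conflist.mk_disabled    # hypothetical example; none were present here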
	I1019 12:52:52.381210  664256 start.go:495] detecting cgroup driver to use...
	I1019 12:52:52.381243  664256 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:52.381290  664256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:52.399404  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:52.414594  664256 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:52.414655  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:52.432231  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:52.447748  664256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:52.544771  664256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:52.640880  664256 docker.go:234] disabling docker service ...
	I1019 12:52:52.640958  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:52.658680  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:52.672412  664256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:52.769106  664256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:52.884868  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:52.906499  664256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:52.933714  664256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:52.933784  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.948702  664256 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:52.948841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.962681  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.976376  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.993092  664256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:53.001841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.017733  664256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.032955  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.050801  664256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:53.067622  664256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:53.083829  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.206267  664256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:52:53.349143  664256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:53.349212  664256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:53.355228  664256 start.go:563] Will wait 60s for crictl version
	I1019 12:52:53.355416  664256 ssh_runner.go:195] Run: which crictl
	I1019 12:52:53.361171  664256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:53.398217  664256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:53.398309  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.428293  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.468822  664256 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:51.507351  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:51.507377  663517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:51.507478  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.528518  663517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.528547  663517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:51.528609  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.529319  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.537540  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.560844  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.652064  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:51.659469  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.665965  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:51.665989  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:51.672138  663517 node_ready.go:35] waiting up to 6m0s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:51.685068  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.686285  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:51.686312  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:51.706556  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:51.706583  663517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:51.726874  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:51.726898  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:51.745384  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:51.745410  663517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:51.761707  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:51.761733  663517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:51.779101  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:51.779128  663517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:51.797377  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:51.797405  663517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:51.812263  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:51.812286  663517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:51.829889  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:53.072809  663517 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:53.072851  663517 node_ready.go:38] duration metric: took 1.400666832s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:53.072871  663517 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:53.072920  663517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:53.700121  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040605714s)
	I1019 12:52:53.700176  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.01507119s)
	I1019 12:52:53.700245  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.870328808s)
	I1019 12:52:53.700294  663517 api_server.go:72] duration metric: took 2.22734911s to wait for apiserver process to appear ...
	I1019 12:52:53.700347  663517 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:53.700370  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:53.702124  663517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-123864 addons enable metrics-server
	
	I1019 12:52:53.707464  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:53.707492  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
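	Note: the [+]/[-] breakdown is the verbose form of the apiserver health endpoint; the two failing poststarthooks (RBAC bootstrap roles and the default priority classes) normally clear within seconds of a restart, which is why minikube keeps polling rather than failing. The same view can be pulled from any running cluster:
	
	    $ kubectl get --raw '/healthz?verbose'
	    $ kubectl get --raw '/readyz?verbose'    # newer, more granular readiness endpoint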
	I1019 12:52:53.714665  663517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:53.716036  663517 addons.go:514] duration metric: took 2.243010209s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:53.470131  664256 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:53.492572  664256 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:53.498533  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:53.511548  664256 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:53.511704  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:53.511776  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.554672  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.554693  664256 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:53.554740  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.588812  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.588842  664256 cache_images.go:85] Images are preloaded, skipping loading
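The two `crictl images --output json` runs above back the "all images are preloaded" decision. A sketch of such a check (the JSON field names `images`/`repoTags` and the sample tag are assumptions about crictl's output format, not taken from this log):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList matches crictl's JSON output as assumed here:
    // {"images":[{"repoTags":["..."]}, ...]}.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // One image a v1.34.1/cri-o preload would plausibly contain (assumed).
        want := "registry.k8s.io/kube-apiserver:v1.34.1"
        fmt.Printf("%s preloaded: %v\n", want, have[want])
    }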
	I1019 12:52:53.588852  664256 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 12:52:53.588996  664256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-999693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
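Note the doubled ExecStart= in the kubelet unit above: an empty ExecStart= is the systemd convention for clearing the command inherited from the base kubelet.service before a drop-in redefines it. A sketch of writing such a drop-in, using the unit text from the log and the 10-kubeadm.conf path scp'd a few lines below (illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // dropIn is the unit fragment from the log; the first, empty ExecStart=
    // resets the base unit's command so the second line can replace it.
    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-999693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

    [Install]
    `

    func main() {
        dir := "/etc/systemd/system/kubelet.service.d"
        if err := os.MkdirAll(dir, 0755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Pick up the new drop-in, as the log does with `systemctl daemon-reload`.
        if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "%s: %v\n", out, err)
            os.Exit(1)
        }
    }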
	I1019 12:52:53.589088  664256 ssh_runner.go:195] Run: crio config
	I1019 12:52:53.643663  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:53.643692  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:53.643715  664256 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:53.643745  664256 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-999693 NodeName:default-k8s-diff-port-999693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:53.643935  664256 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-999693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
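The kubeadm config above is rendered from the options struct logged at kubeadm.go:190. A minimal sketch of that kind of render step with text/template (the template and struct here are illustrative, not minikube's actual template; the values are copied from the log):

    package main

    import (
        "os"
        "text/template"
    )

    // A fragment of the InitConfiguration shown above, parameterized.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
        AdvertiseAddress string
        BindPort         int
        CRISocket        string
        NodeName         string
    }

    func main() {
        t := template.Must(template.New("init").Parse(initCfg))
        // Values copied from the kubeadm.go:190 options line above.
        err := t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.85.2",
            BindPort:         8444,
            CRISocket:        "unix:///var/run/crio/crio.sock",
            NodeName:         "default-k8s-diff-port-999693",
        })
        if err != nil {
            panic(err)
        }
    }

Note how the generated YAML matches the v1beta4 convention visible above, where extraArgs are name/value list entries rather than a flat map.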
	
	I1019 12:52:53.644016  664256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:53.652520  664256 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:53.652594  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:53.660846  664256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 12:52:53.674227  664256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:53.687240  664256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1019 12:52:53.700930  664256 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:53.705067  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:53.717166  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.801260  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:53.825321  664256 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693 for IP: 192.168.85.2
	I1019 12:52:53.825347  664256 certs.go:195] generating shared ca certs ...
	I1019 12:52:53.825370  664256 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:53.825553  664256 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:53.825597  664256 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:53.825608  664256 certs.go:257] generating profile certs ...
	I1019 12:52:53.825725  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/client.key
	I1019 12:52:53.825803  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key.8ef1e1bb
	I1019 12:52:53.825855  664256 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key
	I1019 12:52:53.826004  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:53.826045  664256 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:53.826057  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:53.826084  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:53.826120  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:53.826159  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:53.826218  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:53.827044  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:53.850305  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:53.874056  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:53.900302  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:53.924868  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 12:52:53.943707  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:52:53.960778  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:53.977601  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 12:52:53.994887  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:54.012296  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:54.038626  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:54.063497  664256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:54.079249  664256 ssh_runner.go:195] Run: openssl version
	I1019 12:52:54.086057  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:54.097143  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102203  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102259  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.158908  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:54.169449  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:54.182754  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188730  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188802  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.244383  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:54.254644  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:54.263550  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267515  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267578  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.304899  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
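The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its OpenSSL subject-name hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors in /etc/ssl/certs. A sketch of one such step, shelling out to the same openssl invocation the log uses (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert creates the "<subject-hash>.0" symlink OpenSSL expects.
    func linkCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }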
	I1019 12:52:54.313985  664256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:54.317801  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:54.360081  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:54.405761  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:54.464318  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:54.525359  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:54.563734  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
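Each `-checkend 86400` run above exits non-zero if the certificate expires within 24 hours. The equivalent check in pure Go, as a sketch (the path is one of those from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, matching `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            os.Exit(1) // matches openssl's non-zero exit on imminent expiry
        }
    }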
	I1019 12:52:54.608045  664256 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:54.608169  664256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:54.608231  664256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:54.649470  664256 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:52:54.649495  664256 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:52:54.649501  664256 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:52:54.649506  664256 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:52:54.649511  664256 cri.go:89] found id: ""
	I1019 12:52:54.649557  664256 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:54.665837  664256 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:54Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:54.665908  664256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:54.677684  664256 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:54.677708  664256 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:54.677757  664256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:54.687556  664256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:54.689468  664256 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-999693" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.690566  664256 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-999693" cluster setting kubeconfig missing "default-k8s-diff-port-999693" context setting]
	I1019 12:52:54.691940  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.694639  664256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:54.705918  664256 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:52:54.705949  664256 kubeadm.go:601] duration metric: took 28.235813ms to restartPrimaryControlPlane
	I1019 12:52:54.705960  664256 kubeadm.go:402] duration metric: took 97.926007ms to StartCluster
	I1019 12:52:54.705977  664256 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.706033  664256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.708821  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.709325  664256 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:54.709463  664256 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.709490  664256 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.709502  664256 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:54.709534  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.709617  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:54.709548  664256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:54.709808  664256 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.710141  664256 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.710161  664256 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:54.710191  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.711868  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.712514  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.709821  664256 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.713522  664256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-999693"
	I1019 12:52:54.713860  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.714625  664256 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:54.715871  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:54.746297  664256 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:54.747517  664256 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:54.747552  664256 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:54.749165  664256 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-999693"
	I1019 12:52:54.749177  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1019 12:52:54.749186  664256 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:54.749191  664256 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:54.749216  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.749232  664256 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.749245  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:54.749256  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749306  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749711  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.783580  664256 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.783608  664256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:54.783676  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.787579  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.788172  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.817481  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.916555  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:54.916589  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:54.918652  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:54.921391  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.939730  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:54.939840  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:54.940294  664256 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:54.941172  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.960699  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:54.960783  664256 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:54.976260  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:54.976341  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:54.996375  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:54.996401  664256 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:55.017050  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:55.017079  664256 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:55.033603  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:55.033632  664256 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:55.048007  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:55.048032  664256 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:55.063077  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:55.063102  664256 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:55.078449  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:56.495857  664256 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:56.495897  664256 node_ready.go:38] duration metric: took 1.555549648s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:56.495915  664256 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:56.495982  664256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:57.096998  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.175567368s)
	I1019 12:52:57.097030  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.155826931s)
	I1019 12:52:57.097189  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018704195s)
	I1019 12:52:57.097307  664256 api_server.go:72] duration metric: took 2.387607096s to wait for apiserver process to appear ...
	I1019 12:52:57.097327  664256 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:57.097348  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.100178  664256 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-999693 addons enable metrics-server
	
	I1019 12:52:57.102943  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.102968  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:57.105461  664256 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:54.200764  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.206405  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:54.206480  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:54.701368  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.709189  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:54.710714  663517 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:54.710735  663517 api_server.go:131] duration metric: took 1.010380706s to wait for apiserver health ...
	I1019 12:52:54.710745  663517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:54.721732  663517 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:54.721787  663517 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.721804  663517 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.721814  663517 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.721826  663517 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.721838  663517 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.721893  663517 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.721905  663517 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.721926  663517 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.721934  663517 system_pods.go:74] duration metric: took 11.182501ms to wait for pod list to return data ...
	I1019 12:52:54.721949  663517 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:54.728320  663517 default_sa.go:45] found service account: "default"
	I1019 12:52:54.728404  663517 default_sa.go:55] duration metric: took 6.446433ms for default service account to be created ...
	I1019 12:52:54.728450  663517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:54.742048  663517 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:54.742087  663517 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.742747  663517 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.743381  663517 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.743410  663517 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.743900  663517 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.744078  663517 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.744455  663517 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.744805  663517 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.744821  663517 system_pods.go:126] duration metric: took 16.360253ms to wait for k8s-apps to be running ...
	I1019 12:52:54.745172  663517 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:54.745631  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:54.769658  663517 system_svc.go:56] duration metric: took 24.811398ms WaitForService to wait for kubelet
	I1019 12:52:54.769727  663517 kubeadm.go:586] duration metric: took 3.296760449s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:54.769750  663517 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:54.773633  663517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:54.773745  663517 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:54.773776  663517 node_conditions.go:105] duration metric: took 4.019851ms to run NodePressure ...
	I1019 12:52:54.773995  663517 start.go:241] waiting for startup goroutines ...
	I1019 12:52:54.774026  663517 start.go:246] waiting for cluster config update ...
	I1019 12:52:54.774043  663517 start.go:255] writing updated cluster config ...
	I1019 12:52:54.774837  663517 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:54.781544  663517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:54.790057  663517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:56.796654  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:52:57.109849  664256 addons.go:514] duration metric: took 2.400528693s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:57.598353  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.604765  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.604814  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:58.098137  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:58.103228  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:58.104494  664256 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:58.104523  664256 api_server.go:131] duration metric: took 1.007188483s to wait for apiserver health ...
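The 500s above come from post-start hooks (rbac/bootstrap-roles and friends) that finish shortly after the apiserver starts; the log simply re-polls /healthz until it returns 200 "ok". A sketch of such a poll (InsecureSkipVerify stands in for loading the cluster CA, which real tooling should do instead; host and port are from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.85.2:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // a 500 or dial error: retry
        }
        fmt.Fprintln(os.Stderr, "healthz never became ready")
        os.Exit(1)
    }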
	I1019 12:52:58.104535  664256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:58.108083  664256 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:58.108110  664256 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.108118  664256 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.108124  664256 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.108130  664256 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.108142  664256 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.108150  664256 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.108159  664256 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.108168  664256 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.108179  664256 system_pods.go:74] duration metric: took 3.637436ms to wait for pod list to return data ...
	I1019 12:52:58.108192  664256 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:58.110578  664256 default_sa.go:45] found service account: "default"
	I1019 12:52:58.110596  664256 default_sa.go:55] duration metric: took 2.39546ms for default service account to be created ...
	I1019 12:52:58.110604  664256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:58.113444  664256 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:58.113473  664256 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.113485  664256 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.113496  664256 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.113516  664256 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.113527  664256 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.113534  664256 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.113539  664256 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.113545  664256 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.113553  664256 system_pods.go:126] duration metric: took 2.943742ms to wait for k8s-apps to be running ...
	I1019 12:52:58.113563  664256 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:58.113613  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:58.128579  664256 system_svc.go:56] duration metric: took 15.004824ms WaitForService to wait for kubelet
	I1019 12:52:58.128609  664256 kubeadm.go:586] duration metric: took 3.418911937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:58.128632  664256 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:58.131784  664256 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:58.131819  664256 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:58.131832  664256 node_conditions.go:105] duration metric: took 3.194851ms to run NodePressure ...
	I1019 12:52:58.131843  664256 start.go:241] waiting for startup goroutines ...
	I1019 12:52:58.131850  664256 start.go:246] waiting for cluster config update ...
	I1019 12:52:58.131862  664256 start.go:255] writing updated cluster config ...
	I1019 12:52:58.132300  664256 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:58.136574  664256 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:58.140912  664256 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:53:00.147567  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:52:59.295731  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:01.298842  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:03.300380  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 19 12:52:26 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:26.295890111Z" level=info msg="Started container" PID=1726 containerID=b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper id=9b6488e0-33d1-4a21-b97e-d8fa282eb3da name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ce503128b1c053d63a2dc142585ed9cf38b2b6920892ae9ea67fad6fc68278b
	Oct 19 12:52:27 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:27.249019147Z" level=info msg="Removing container: b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c" id=d562f866-c216-48f1-a20c-772955422dba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:27 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:27.26055677Z" level=info msg="Removed container b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=d562f866-c216-48f1-a20c-772955422dba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.276789662Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2353e3d3-7b63-4bb0-9bbd-57866ce14963 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.277721855Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5eb1c1d4-329e-4006-90d5-86b31b4983f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.278666589Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=1a25572f-a472-43e4-9fc0-e97e46ce0b2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.278955677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283211829Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283385638Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e8bd0b09ce8a2a1292e6982a1d9402a90c9f199b83fb96412238ff3cf520766a/merged/etc/passwd: no such file or directory"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283455532Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e8bd0b09ce8a2a1292e6982a1d9402a90c9f199b83fb96412238ff3cf520766a/merged/etc/group: no such file or directory"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.283775619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.309915989Z" level=info msg="Created container b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44: kube-system/storage-provisioner/storage-provisioner" id=1a25572f-a472-43e4-9fc0-e97e46ce0b2f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.310733427Z" level=info msg="Starting container: b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44" id=86eea788-22fd-4228-8d21-92fd1a55a22c name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:38 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:38.312743155Z" level=info msg="Started container" PID=1740 containerID=b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44 description=kube-system/storage-provisioner/storage-provisioner id=86eea788-22fd-4228-8d21-92fd1a55a22c name=/runtime.v1.RuntimeService/StartContainer sandboxID=4658e3fcca3594b584c6308ecbc62da5028f9fe2979e8db9d54cfc50cfdb93ff
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.165881192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dae67251-d373-470e-a6c7-de56d3eecb1a name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.166866933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=67289d87-af30-488e-bd86-4f6cd8f87950 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.167808757Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=32830c0b-0813-4fa8-a9d4-18ebbce16606 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.168033954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.173751882Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.174395924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.201953355Z" level=info msg="Created container 29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=32830c0b-0813-4fa8-a9d4-18ebbce16606 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.202609162Z" level=info msg="Starting container: 29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b" id=50e0fcbd-36ea-4a57-9ca6-b6c117447b52 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.204320014Z" level=info msg="Started container" PID=1755 containerID=29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper id=50e0fcbd-36ea-4a57-9ca6-b6c117447b52 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ce503128b1c053d63a2dc142585ed9cf38b2b6920892ae9ea67fad6fc68278b
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.298389433Z" level=info msg="Removing container: b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021" id=adf2be1a-dc7b-485a-9133-0051d73fce00 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:45 old-k8s-version-577062 crio[558]: time="2025-10-19T12:52:45.307844523Z" level=info msg="Removed container b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb/dashboard-metrics-scraper" id=adf2be1a-dc7b-485a-9133-0051d73fce00 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	29b71e817f4ea       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   2ce503128b1c0       dashboard-metrics-scraper-5f989dc9cf-kx2tb       kubernetes-dashboard
	b22cedaa72f07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   4658e3fcca359       storage-provisioner                              kube-system
	141891d9bcecd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago       Running             kubernetes-dashboard        0                   9eef0afbabf70       kubernetes-dashboard-8694d4445c-4xrjh            kubernetes-dashboard
	831f176d66e63       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   46899d6103082       coredns-5dd5756b68-44mqv                         kube-system
	1644ce12959f7       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   1278fa0581229       busybox                                          default
	bca9cb8e7e1a4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   f25a220960201       kindnet-2h26b                                    kube-system
	e9c3dda964119       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   d48b279492427       kube-proxy-lhths                                 kube-system
	a9a54186737cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   4658e3fcca359       storage-provisioner                              kube-system
	ba25f6a999b0c       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   5e6b6fc78f636       kube-apiserver-old-k8s-version-577062            kube-system
	fbf4c9d76e1db       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   e3ff6ccb73e03       kube-controller-manager-old-k8s-version-577062   kube-system
	8577c744298fa       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   648f572919b9d       kube-scheduler-old-k8s-version-577062            kube-system
	2c9fe6c9b1b32       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   b2633e090834e       etcd-old-k8s-version-577062                      kube-system
	
	
	==> coredns [831f176d66e63a51f4bc180ce401d4ecda5e783f443e4ffd91216fd1999c8eef] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52856 - 28719 "HINFO IN 8134314610029088256.8191675844325686558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.085771502s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-577062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-577062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=old-k8s-version-577062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_50_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:50:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-577062
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:50:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:36 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-577062
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                bfa1b0a1-e61a-4552-82c8-d6cc29922f2a
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-44mqv                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-old-k8s-version-577062                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m8s
	  kube-system                 kindnet-2h26b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-577062             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-old-k8s-version-577062    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-lhths                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-577062             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-kx2tb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4xrjh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m8s               kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s               kubelet          Node old-k8s-version-577062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s               kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m8s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           117s               node-controller  Node old-k8s-version-577062 event: Registered Node old-k8s-version-577062 in Controller
	  Normal  NodeReady                101s               kubelet          Node old-k8s-version-577062 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node old-k8s-version-577062 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node old-k8s-version-577062 event: Registered Node old-k8s-version-577062 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [2c9fe6c9b1b32926f91a1bde357e191e5e1e3b8139fa61a8202db438bcecf6d3] <==
	{"level":"info","ts":"2025-10-19T12:52:03.725582Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T12:52:03.725594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-19T12:52:03.725875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-10-19T12:52:03.725952Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-10-19T12:52:03.726056Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:52:03.72609Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T12:52:03.728072Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T12:52:03.72835Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T12:52:03.728416Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T12:52:03.729618Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-19T12:52:03.729684Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-10-19T12:52:05.016492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-10-19T12:52:05.016592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.016613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-10-19T12:52:05.018407Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-577062 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T12:52:05.018447Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:52:05.018416Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T12:52:05.018637Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T12:52:05.018669Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T12:52:05.01949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-10-19T12:52:05.019687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:53:06 up  2:35,  0 user,  load average: 4.86, 4.85, 3.12
	Linux old-k8s-version-577062 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bca9cb8e7e1a4789fce59ad4a5788c1e7058d9f9e7ec1057f342040b015717bc] <==
	I1019 12:52:07.727238       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:07.727524       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1019 12:52:07.727697       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:07.727717       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:07.727742       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:07.926955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:07.926986       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:07.927001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:08.124622       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:08.227779       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:08.227800       1 metrics.go:72] Registering metrics
	I1019 12:52:08.227860       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:17.927505       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:17.927589       1 main.go:301] handling current node
	I1019 12:52:27.927229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:27.927257       1 main.go:301] handling current node
	I1019 12:52:37.927588       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:37.927643       1 main.go:301] handling current node
	I1019 12:52:47.927651       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:47.927692       1 main.go:301] handling current node
	I1019 12:52:57.934500       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1019 12:52:57.934545       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ba25f6a999b0c5ae02f451d523de313de12a4d3d20296a8becbbee6fa1a54b92] <==
	I1019 12:52:06.323706       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1019 12:52:06.331726       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1019 12:52:06.339986       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:06.376101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1019 12:52:06.376141       1 aggregator.go:166] initial CRD sync complete...
	I1019 12:52:06.376156       1 autoregister_controller.go:141] Starting autoregister controller
	I1019 12:52:06.376165       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:06.376176       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:52:06.421879       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1019 12:52:06.422004       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1019 12:52:06.421894       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1019 12:52:06.422256       1 shared_informer.go:318] Caches are synced for configmaps
	I1019 12:52:06.423872       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 12:52:07.227784       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:07.340024       1 controller.go:624] quota admission added evaluator for: namespaces
	I1019 12:52:07.370988       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1019 12:52:07.389012       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:07.397441       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:07.404892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1019 12:52:07.439392       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.75.19"}
	I1019 12:52:07.456952       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.12.204"}
	I1019 12:52:18.734320       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:52:18.806743       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 12:52:18.806744       1 controller.go:624] quota admission added evaluator for: endpoints
	I1019 12:52:18.983184       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fbf4c9d76e1dbee5411f82439799eddfa94579d729009e817ab32efa62aa037b] <==
	I1019 12:52:18.987514       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1019 12:52:18.987618       1 shared_informer.go:318] Caches are synced for resource quota
	I1019 12:52:18.988955       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1019 12:52:18.997992       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4xrjh"
	I1019 12:52:18.998855       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	I1019 12:52:19.006704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.315897ms"
	I1019 12:52:19.007018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.794631ms"
	I1019 12:52:19.012596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.769031ms"
	I1019 12:52:19.012682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.512µs"
	I1019 12:52:19.014691       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.130509ms"
	I1019 12:52:19.025893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.766µs"
	I1019 12:52:19.026264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.490846ms"
	I1019 12:52:19.026344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.245µs"
	I1019 12:52:19.303886       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:52:19.331958       1 shared_informer.go:318] Caches are synced for garbage collector
	I1019 12:52:19.331994       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1019 12:52:23.267968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.552936ms"
	I1019 12:52:23.268592       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="101.374µs"
	I1019 12:52:26.254211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="126.751µs"
	I1019 12:52:27.259886       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.096µs"
	I1019 12:52:28.260403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.37µs"
	I1019 12:52:45.308669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.894µs"
	I1019 12:52:47.575941       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.251184ms"
	I1019 12:52:47.576261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.431µs"
	I1019 12:52:49.317385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.947µs"
	
	
	==> kube-proxy [e9c3dda964119fe6efea193da287473cefe468088e2bca9f9cf19321e2a8bfeb] <==
	I1019 12:52:07.580491       1 server_others.go:69] "Using iptables proxy"
	I1019 12:52:07.588813       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1019 12:52:07.605913       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:07.608334       1 server_others.go:152] "Using iptables Proxier"
	I1019 12:52:07.608361       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1019 12:52:07.608366       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1019 12:52:07.608393       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1019 12:52:07.608737       1 server.go:846] "Version info" version="v1.28.0"
	I1019 12:52:07.608755       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:07.609766       1 config.go:315] "Starting node config controller"
	I1019 12:52:07.609803       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1019 12:52:07.609816       1 config.go:188] "Starting service config controller"
	I1019 12:52:07.609849       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1019 12:52:07.609361       1 config.go:97] "Starting endpoint slice config controller"
	I1019 12:52:07.610040       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1019 12:52:07.709977       1 shared_informer.go:318] Caches are synced for node config
	I1019 12:52:07.710075       1 shared_informer.go:318] Caches are synced for service config
	I1019 12:52:07.710399       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8577c744298fa841bb6cdfc8e4e7b5ca9854b6075ef4d4ee96ca794f243de677] <==
	W1019 12:52:06.327806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.327831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.329043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.329072       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.329443       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1019 12:52:06.330926       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.330058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.330983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.330273       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1019 12:52:06.331003       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W1019 12:52:06.330444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331022       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.330580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.331352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.331406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.332785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.332974       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W1019 12:52:06.332919       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E1019 12:52:06.333217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1019 12:52:06.336658       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E1019 12:52:06.336690       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	I1019 12:52:06.418647       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059784     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/319a68f4-f2f5-4163-af82-7420a9bd1a41-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4xrjh\" (UID: \"319a68f4-f2f5-4163-af82-7420a9bd1a41\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4xrjh"
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059818     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5hqb\" (UniqueName: \"kubernetes.io/projected/a2b1a6c1-1690-476d-972a-fac12a8b3d1f-kube-api-access-l5hqb\") pod \"dashboard-metrics-scraper-5f989dc9cf-kx2tb\" (UID: \"a2b1a6c1-1690-476d-972a-fac12a8b3d1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	Oct 19 12:52:19 old-k8s-version-577062 kubelet[715]: I1019 12:52:19.059950     715 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2b1a6c1-1690-476d-972a-fac12a8b3d1f-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-kx2tb\" (UID: \"a2b1a6c1-1690-476d-972a-fac12a8b3d1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb"
	Oct 19 12:52:23 old-k8s-version-577062 kubelet[715]: I1019 12:52:23.253438     715 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4xrjh" podStartSLOduration=1.512541827 podCreationTimestamp="2025-10-19 12:52:18 +0000 UTC" firstStartedPulling="2025-10-19 12:52:19.328800685 +0000 UTC m=+16.257529503" lastFinishedPulling="2025-10-19 12:52:23.069597706 +0000 UTC m=+19.998326536" observedRunningTime="2025-10-19 12:52:23.253252401 +0000 UTC m=+20.181981240" watchObservedRunningTime="2025-10-19 12:52:23.25333886 +0000 UTC m=+20.182067697"
	Oct 19 12:52:26 old-k8s-version-577062 kubelet[715]: I1019 12:52:26.243156     715 scope.go:117] "RemoveContainer" containerID="b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: I1019 12:52:27.247609     715 scope.go:117] "RemoveContainer" containerID="b57cfe227ad0bcc297fb550d2ba0c9dab9af664d38a4b99b249229e327067f7c"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: I1019 12:52:27.247803     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:27 old-k8s-version-577062 kubelet[715]: E1019 12:52:27.248186     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:28 old-k8s-version-577062 kubelet[715]: I1019 12:52:28.251105     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:28 old-k8s-version-577062 kubelet[715]: E1019 12:52:28.251384     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:29 old-k8s-version-577062 kubelet[715]: I1019 12:52:29.305992     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:29 old-k8s-version-577062 kubelet[715]: E1019 12:52:29.306384     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:38 old-k8s-version-577062 kubelet[715]: I1019 12:52:38.276273     715 scope.go:117] "RemoveContainer" containerID="a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.165257     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.297171     715 scope.go:117] "RemoveContainer" containerID="b122f736733a397695439942bee987805409ccbb4d09124671e703469fd43021"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: I1019 12:52:45.297511     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:52:45 old-k8s-version-577062 kubelet[715]: E1019 12:52:45.297863     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:52:49 old-k8s-version-577062 kubelet[715]: I1019 12:52:49.306242     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:52:49 old-k8s-version-577062 kubelet[715]: E1019 12:52:49.306639     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:53:00 old-k8s-version-577062 kubelet[715]: I1019 12:53:00.165105     715 scope.go:117] "RemoveContainer" containerID="29b71e817f4eaab5850a38256c65f4e185e62c4a370d0b50d490bbb95e1d7c5b"
	Oct 19 12:53:00 old-k8s-version-577062 kubelet[715]: E1019 12:53:00.165534     715 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-kx2tb_kubernetes-dashboard(a2b1a6c1-1690-476d-972a-fac12a8b3d1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-kx2tb" podUID="a2b1a6c1-1690-476d-972a-fac12a8b3d1f"
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:00 old-k8s-version-577062 systemd[1]: kubelet.service: Consumed 1.612s CPU time.
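
dashboard-metrics-scraper is cycling through CrashLoopBackOff here, with the back-off doubling from 10s to 20s as kubelet's standard exponential back-off kicks in. The usual next step, sketched with the context and pod name taken from this log, is to pull the previous container's output and the pod events:

	kubectl --context old-k8s-version-577062 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-kx2tb --previous
	kubectl --context old-k8s-version-577062 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-kx2tb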
	
	
	==> kubernetes-dashboard [141891d9bcecd7b8f29e6a840f8c01c263be938405ca6b55629648a298625543] <==
	2025/10/19 12:52:23 Using namespace: kubernetes-dashboard
	2025/10/19 12:52:23 Using in-cluster config to connect to apiserver
	2025/10/19 12:52:23 Using secret token for csrf signing
	2025/10/19 12:52:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:52:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:52:23 Successful initial request to the apiserver, version: v1.28.0
	2025/10/19 12:52:23 Generating JWE encryption key
	2025/10/19 12:52:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:52:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:52:23 Initializing JWE encryption key from synchronized object
	2025/10/19 12:52:23 Creating in-cluster Sidecar client
	2025/10/19 12:52:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Serving insecurely on HTTP port: 9090
	2025/10/19 12:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Starting overwatch
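
The two health-check failures line up with the crash-looping dashboard-metrics-scraper pod seen in the kubelet log above: the dashboard cannot reach the scraper's Service while its only backend is down. (The 12:52:23 "Starting overwatch" line appearing after the 12:52:53 entry is most likely interleaved goroutine output, not corruption.) One way to confirm the Service has no ready endpoints, assuming the same context:

	kubectl --context old-k8s-version-577062 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper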
	
	
	==> storage-provisioner [a9a54186737cc9a1243f50a29cf83a48c7326a7fa8a8c9b9f0a830c882f6d33f] <==
	I1019 12:52:07.544833       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:52:37.549447       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
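
10.96.0.1:443 is the in-cluster kubernetes Service VIP, so this first provisioner instance died because the apiserver was unreachable through the Service network during the restart window; the replacement instance below connects fine. A throwaway probe of the same path, assuming any curl-capable image is pullable (curlimages/curl is only an example):

	kubectl --context old-k8s-version-577062 run apiserver-probe --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -sk --max-time 5 https://10.96.0.1:443/version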
	
	
	==> storage-provisioner [b22cedaa72f076deb56a9e65cbf65d4fedd7743c72f9de44745670d3da78cd44] <==
	I1019 12:52:38.324150       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:38.331599       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:38.331634       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1019 12:52:55.729527       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:52:55.729596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8829309e-ce84-4b37-8b7e-53ec540533f6", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b became leader
	I1019 12:52:55.729678       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b!
	I1019 12:52:55.830929       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-577062_abc5fd64-6d0b-45bb-8a5a-6904b511212b!
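
The ~17s gap between attempting and acquiring the lease is the previous provisioner's leadership expiring. This provisioner uses an Endpoints-based lock, so the current holder can be read from the object named in the event above, assuming the usual control-plane.alpha.kubernetes.io/leader annotation that client-go's leader election writes:

	kubectl --context old-k8s-version-577062 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'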
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-577062 -n old-k8s-version-577062
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-577062 -n old-k8s-version-577062: exit status 2 (390.424323ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-577062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.13s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-561408 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-561408 --alsologtostderr -v=1: exit status 80 (2.13385948s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-561408 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:53:01.028357  668483 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:01.028562  668483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:01.028574  668483 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:01.028581  668483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:01.028903  668483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:01.029235  668483 out.go:368] Setting JSON to false
	I1019 12:53:01.029292  668483 mustload.go:65] Loading cluster: no-preload-561408
	I1019 12:53:01.029802  668483 config.go:182] Loaded profile config "no-preload-561408": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:01.030401  668483 cli_runner.go:164] Run: docker container inspect no-preload-561408 --format={{.State.Status}}
	I1019 12:53:01.051158  668483 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:53:01.051457  668483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:01.141308  668483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-19 12:53:01.124219606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:01.142276  668483 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-561408 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 12:53:01.144679  668483 out.go:179] * Pausing node no-preload-561408 ... 
	I1019 12:53:01.145824  668483 host.go:66] Checking if "no-preload-561408" exists ...
	I1019 12:53:01.146146  668483 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:01.146191  668483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-561408
	I1019 12:53:01.173041  668483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/no-preload-561408/id_rsa Username:docker}
	I1019 12:53:01.285118  668483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:01.303602  668483 pause.go:52] kubelet running: true
	I1019 12:53:01.303687  668483 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:01.572580  668483 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:01.572734  668483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:01.677350  668483 cri.go:89] found id: "ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0"
	I1019 12:53:01.677377  668483 cri.go:89] found id: "2f726a5e2a456524d90c9f4cabeb7cf0ba8039f3ba6d55bd262c7f75669065fb"
	I1019 12:53:01.677383  668483 cri.go:89] found id: "e4ca43f4f6043f242e54cacc117ecafdddba7c52f5e782eaac1f1a294095d562"
	I1019 12:53:01.677387  668483 cri.go:89] found id: "020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818"
	I1019 12:53:01.677391  668483 cri.go:89] found id: "063e2ede2fb5d7efd8c012dc8a326dea1655039e3c63f156dbcc015d3aa6d400"
	I1019 12:53:01.677396  668483 cri.go:89] found id: "6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4"
	I1019 12:53:01.677400  668483 cri.go:89] found id: "f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7"
	I1019 12:53:01.677456  668483 cri.go:89] found id: "9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d"
	I1019 12:53:01.677461  668483 cri.go:89] found id: "01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115"
	I1019 12:53:01.677469  668483 cri.go:89] found id: "df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	I1019 12:53:01.677477  668483 cri.go:89] found id: "5799985fefa34297176d719d0444775a1e3245e7e4e852cb78f47add03751360"
	I1019 12:53:01.677512  668483 cri.go:89] found id: ""
	I1019 12:53:01.677588  668483 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:01.695833  668483 retry.go:31] will retry after 216.008972ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:01Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:01.912410  668483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:01.932149  668483 pause.go:52] kubelet running: false
	I1019 12:53:01.932229  668483 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:02.156493  668483 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:02.156613  668483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:02.251513  668483 cri.go:89] found id: "ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0"
	I1019 12:53:02.251538  668483 cri.go:89] found id: "2f726a5e2a456524d90c9f4cabeb7cf0ba8039f3ba6d55bd262c7f75669065fb"
	I1019 12:53:02.251543  668483 cri.go:89] found id: "e4ca43f4f6043f242e54cacc117ecafdddba7c52f5e782eaac1f1a294095d562"
	I1019 12:53:02.251547  668483 cri.go:89] found id: "020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818"
	I1019 12:53:02.251552  668483 cri.go:89] found id: "063e2ede2fb5d7efd8c012dc8a326dea1655039e3c63f156dbcc015d3aa6d400"
	I1019 12:53:02.251556  668483 cri.go:89] found id: "6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4"
	I1019 12:53:02.251560  668483 cri.go:89] found id: "f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7"
	I1019 12:53:02.251564  668483 cri.go:89] found id: "9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d"
	I1019 12:53:02.251568  668483 cri.go:89] found id: "01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115"
	I1019 12:53:02.251576  668483 cri.go:89] found id: "df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	I1019 12:53:02.251580  668483 cri.go:89] found id: "5799985fefa34297176d719d0444775a1e3245e7e4e852cb78f47add03751360"
	I1019 12:53:02.251584  668483 cri.go:89] found id: ""
	I1019 12:53:02.251647  668483 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:02.271237  668483 retry.go:31] will retry after 264.553848ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:02Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:02.537099  668483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:02.555324  668483 pause.go:52] kubelet running: false
	I1019 12:53:02.555399  668483 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:02.782450  668483 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:02.782532  668483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:02.880668  668483 cri.go:89] found id: "ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0"
	I1019 12:53:02.880700  668483 cri.go:89] found id: "2f726a5e2a456524d90c9f4cabeb7cf0ba8039f3ba6d55bd262c7f75669065fb"
	I1019 12:53:02.880705  668483 cri.go:89] found id: "e4ca43f4f6043f242e54cacc117ecafdddba7c52f5e782eaac1f1a294095d562"
	I1019 12:53:02.880711  668483 cri.go:89] found id: "020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818"
	I1019 12:53:02.880716  668483 cri.go:89] found id: "063e2ede2fb5d7efd8c012dc8a326dea1655039e3c63f156dbcc015d3aa6d400"
	I1019 12:53:02.880720  668483 cri.go:89] found id: "6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4"
	I1019 12:53:02.880724  668483 cri.go:89] found id: "f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7"
	I1019 12:53:02.880728  668483 cri.go:89] found id: "9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d"
	I1019 12:53:02.880733  668483 cri.go:89] found id: "01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115"
	I1019 12:53:02.880741  668483 cri.go:89] found id: "df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	I1019 12:53:02.880746  668483 cri.go:89] found id: "5799985fefa34297176d719d0444775a1e3245e7e4e852cb78f47add03751360"
	I1019 12:53:02.880749  668483 cri.go:89] found id: ""
	I1019 12:53:02.880795  668483 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:03.002758  668483 out.go:203] 
	W1019 12:53:03.049007  668483 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:53:03.049032  668483 out.go:285] * 
	* 
	W1019 12:53:03.056853  668483 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_7.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_7.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:53:03.096793  668483 out.go:203] 

                                                
                                                
** /stderr **
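
The pause failed on the container-state listing, not on crictl: pause first disables kubelet, then asks crictl for the container IDs in the target namespaces (kube-system, kubernetes-dashboard, istio-operator), and finally shells out to `sudo runc list -f json`, which died because /run/runc did not exist. With crio, /run/runc is the default runtime_root for the runc runtime, so a missing directory (as opposed to an empty listing) is the suspicious part. A sketch of how one might poke at this from the host, assuming the profile is still up:

	minikube -p no-preload-561408 ssh -- sudo ls -ld /run/runc /run/crio
	minikube -p no-preload-561408 ssh -- sudo crictl ps -a
	# runc takes an explicit state root; an empty list here, not a missing
	# directory, would be the healthy outcome:
	minikube -p no-preload-561408 ssh -- sudo runc --root /run/runc list -f json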
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-561408 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-561408
helpers_test.go:243: (dbg) docker inspect no-preload-561408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	        "Created": "2025-10-19T12:50:45.391801747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 657809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:06.431171295Z",
	            "FinishedAt": "2025-10-19T12:52:05.317296149Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hostname",
	        "HostsPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hosts",
	        "LogPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583-json.log",
	        "Name": "/no-preload-561408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-561408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-561408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	                "LowerDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-561408",
	                "Source": "/var/lib/docker/volumes/no-preload-561408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-561408",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-561408",
	                "name.minikube.sigs.k8s.io": "no-preload-561408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d663382b594673719efdfb7fe4418752523ea860ef845dc7d933dce7316a70fb",
	            "SandboxKey": "/var/run/docker/netns/d663382b5946",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-561408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:4a:01:6d:c4:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f4a13c0b85cf53d05b4d14cdbcd2a320c735f036b2f0ba0e125d18fecb5483e",
	                    "EndpointID": "8bbe028a228bba720bf21f285bbd5a35394aa84a1b362b5a7a2870d444886ba4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-561408",
	                        "a52c329ec080"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
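
The Ports block in this inspect output is exactly what the pause attempt resolved earlier with a Go template (the cli_runner line in the stderr above); the mapping can be reproduced by hand:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-561408
	# prints 33485, matching the sshutil "new ssh client" line in the pause attempt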
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408: exit status 2 (402.284981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-561408 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-561408 logs -n 25: (1.51981054s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:46.925201  664256 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:46.925511  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925521  664256 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:46.925526  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925724  664256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:46.926177  664256 out.go:368] Setting JSON to false
	I1019 12:52:46.927476  664256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9315,"bootTime":1760869052,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:46.927572  664256 start.go:141] virtualization: kvm guest
	I1019 12:52:46.929196  664256 out.go:179] * [default-k8s-diff-port-999693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:46.930756  664256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:46.930801  664256 notify.go:220] Checking for updates...
	I1019 12:52:46.932758  664256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:46.934048  664256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:46.935192  664256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:46.936498  664256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:46.937762  664256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:46.939394  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:46.939848  664256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:46.963683  664256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:46.963772  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.023378  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.013329476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.023535  664256 docker.go:318] overlay module found
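The driver check above looks for the overlay kernel module before committing to docker. A manual spot-check of the same preconditions (a sketch; `docker info`'s `.Driver` template field and the `overlay` module name are standard):

	lsmod | grep -w overlay || sudo modprobe overlay   # ensure the overlay module is loaded
	docker info --format '{{.Driver}}'                 # expect "overlay2" on this host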
	I1019 12:52:47.025269  664256 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:47.026568  664256 start.go:305] selected driver: docker
	I1019 12:52:47.026597  664256 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.026732  664256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:47.027471  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.086363  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.076802932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.086679  664256 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:47.086707  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:47.086755  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:47.086787  664256 start.go:349] cluster config:
	{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.088476  664256 out.go:179] * Starting "default-k8s-diff-port-999693" primary control-plane node in "default-k8s-diff-port-999693" cluster
	I1019 12:52:47.089564  664256 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:47.090727  664256 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:47.091742  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:47.091773  664256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:52:47.091781  664256 cache.go:58] Caching tarball of preloaded images
	I1019 12:52:47.091796  664256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:47.091859  664256 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:52:47.091870  664256 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:52:47.091959  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.112105  664256 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:47.112128  664256 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:47.112142  664256 cache.go:232] Successfully downloaded all kic artifacts
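The pull is skipped because the kicbase image is already present in the local daemon. An equivalent manual check (a sketch; the tag is copied from the log, digest omitted for brevity):

	IMG="gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
	if docker image inspect "$IMG" >/dev/null 2>&1; then
	  echo "image present locally, skipping pull"
	else
	  docker pull "$IMG"
	fi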
	I1019 12:52:47.112172  664256 start.go:360] acquireMachinesLock for default-k8s-diff-port-999693: {Name:mke26e7439408c8adecea1bbb9344a31dd77b3c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:47.112226  664256 start.go:364] duration metric: took 36.455µs to acquireMachinesLock for "default-k8s-diff-port-999693"
	I1019 12:52:47.112245  664256 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:47.112252  664256 fix.go:54] fixHost starting: 
	I1019 12:52:47.112490  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.129772  664256 fix.go:112] recreateIfNeeded on default-k8s-diff-port-999693: state=Stopped err=<nil>
	W1019 12:52:47.129802  664256 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:44.281015  663517 out.go:252] * Restarting existing docker container for "embed-certs-123864" ...
	I1019 12:52:44.281101  663517 cli_runner.go:164] Run: docker start embed-certs-123864
	I1019 12:52:44.526509  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:44.546310  663517 kic.go:430] container "embed-certs-123864" state is running.
	I1019 12:52:44.546720  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:44.565833  663517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:52:44.566069  663517 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:44.566147  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:44.585705  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:44.585938  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:44.585949  663517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:44.586499  663517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58104->127.0.0.1:33490: read: connection reset by peer
	I1019 12:52:47.734652  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
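The first dial at 12:52:44 fails with "connection reset by peer" because sshd inside the just-restarted container is not yet accepting connections; libmachine keeps retrying until the `hostname` command succeeds at 12:52:47. A hand-rolled equivalent (a sketch; the port and key path are taken from the log above):

	# Retry until the container's sshd comes up (up to ~60s)
	for i in $(seq 1 30); do
	  ssh -p 33490 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	      -i ~/.minikube/machines/embed-certs-123864/id_rsa \
	      docker@127.0.0.1 hostname && break
	  sleep 2
	done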
	I1019 12:52:47.734694  663517 ubuntu.go:182] provisioning hostname "embed-certs-123864"
	I1019 12:52:47.734763  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.754305  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.754574  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.754594  663517 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-123864 && echo "embed-certs-123864" | sudo tee /etc/hostname
	I1019 12:52:47.900303  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.900379  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.918114  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.918334  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.918355  663517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-123864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-123864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-123864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:48.051196  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:48.051226  663517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:48.051276  663517 ubuntu.go:190] setting up certificates
	I1019 12:52:48.051294  663517 provision.go:84] configureAuth start
	I1019 12:52:48.051351  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:48.069277  663517 provision.go:143] copyHostCerts
	I1019 12:52:48.069333  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:48.069349  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:48.069433  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:48.069546  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:48.069557  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:48.069604  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:48.069660  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:48.069667  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:48.069692  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:48.069741  663517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.embed-certs-123864 san=[127.0.0.1 192.168.76.2 embed-certs-123864 localhost minikube]
	I1019 12:52:48.585780  663517 provision.go:177] copyRemoteCerts
	I1019 12:52:48.585838  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:48.585871  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.604279  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:48.702233  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:48.720721  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:48.738512  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:48.755942  663517 provision.go:87] duration metric: took 704.627825ms to configureAuth
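configureAuth regenerated the machine server certificate with the SAN set listed at 12:52:48.069741 and copied it into /etc/docker over SSH. A rough openssl equivalent of that issuance (a sketch, assuming the ca.pem/ca-key.pem pair from the certs directory shown above):

	# Issue a server cert carrying the same SANs provision.go requests
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.embed-certs-123864"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-123864,DNS:localhost,DNS:minikube')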
	I1019 12:52:48.755977  663517 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:48.756154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:48.756278  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.775133  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:48.775433  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:48.775459  663517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:49.061359  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:49.061389  663517 machine.go:96] duration metric: took 4.495303282s to provisionDockerMachine
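The sysconfig drop-in written above survives the crio restart; one way to spot-check the result from the host (a sketch using this run's profile name):

	minikube ssh -p embed-certs-123864 "cat /etc/sysconfig/crio.minikube"
	minikube ssh -p embed-certs-123864 "sudo systemctl is-active crio"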
	I1019 12:52:49.061401  663517 start.go:293] postStartSetup for "embed-certs-123864" (driver="docker")
	I1019 12:52:49.061414  663517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:49.061511  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:49.061564  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.787829  657553 pod_ready.go:94] pod "coredns-66bc5c9577-pgxlp" is "Ready"
	I1019 12:52:47.787855  657553 pod_ready.go:86] duration metric: took 31.504899877s for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.789711  657553 pod_ready.go:83] waiting for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.793406  657553 pod_ready.go:94] pod "etcd-no-preload-561408" is "Ready"
	I1019 12:52:47.793446  657553 pod_ready.go:86] duration metric: took 3.709623ms for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.795182  657553 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.798678  657553 pod_ready.go:94] pod "kube-apiserver-no-preload-561408" is "Ready"
	I1019 12:52:47.798700  657553 pod_ready.go:86] duration metric: took 3.496714ms for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.800596  657553 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.986813  657553 pod_ready.go:94] pod "kube-controller-manager-no-preload-561408" is "Ready"
	I1019 12:52:47.986842  657553 pod_ready.go:86] duration metric: took 186.220802ms for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.186670  657553 pod_ready.go:83] waiting for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.586865  657553 pod_ready.go:94] pod "kube-proxy-lppwp" is "Ready"
	I1019 12:52:48.586892  657553 pod_ready.go:86] duration metric: took 400.184165ms for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.785758  657553 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186913  657553 pod_ready.go:94] pod "kube-scheduler-no-preload-561408" is "Ready"
	I1019 12:52:49.186953  657553 pod_ready.go:86] duration metric: took 401.160394ms for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186968  657553 pod_ready.go:40] duration metric: took 32.907293647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.233509  657553 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:49.235163  657553 out.go:179] * Done! kubectl is now configured to use "no-preload-561408" cluster and "default" namespace by default
	W1019 12:52:47.528927  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	I1019 12:52:48.027407  655442 pod_ready.go:94] pod "coredns-5dd5756b68-44mqv" is "Ready"
	I1019 12:52:48.027445  655442 pod_ready.go:86] duration metric: took 40.505181601s for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.030160  655442 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.034042  655442 pod_ready.go:94] pod "etcd-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.034071  655442 pod_ready.go:86] duration metric: took 3.888307ms for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.036741  655442 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.040245  655442 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.040263  655442 pod_ready.go:86] duration metric: took 3.503128ms for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.042393  655442 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.225329  655442 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.225354  655442 pod_ready.go:86] duration metric: took 182.944102ms for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.426194  655442 pod_ready.go:83] waiting for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.826171  655442 pod_ready.go:94] pod "kube-proxy-lhths" is "Ready"
	I1019 12:52:48.826194  655442 pod_ready.go:86] duration metric: took 399.973598ms for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.025864  655442 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425023  655442 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-577062" is "Ready"
	I1019 12:52:49.425051  655442 pod_ready.go:86] duration metric: took 399.16124ms for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425063  655442 pod_ready.go:40] duration metric: took 41.909017776s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.471302  655442 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 12:52:49.473153  655442 out.go:203] 
	W1019 12:52:49.474513  655442 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 12:52:49.475817  655442 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:52:49.477137  655442 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-577062" cluster and "default" namespace by default
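This run warns because the client is six minor versions ahead of the v1.28.0 cluster, while the no-preload run above reported "minor skew: 0" and stayed silent. The same comparison can be made directly (a sketch; requires jq):

	# Compare client vs. server minor versions the way the skew check does
	kubectl version -o json | \
	  jq -r '"client minor: \(.clientVersion.minor)  server minor: \(.serverVersion.minor)"'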
	I1019 12:52:49.080598  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.176835  663517 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:49.180594  663517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:49.180624  663517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:49.180639  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:49.180704  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:49.180802  663517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:49.180915  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:49.188874  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:49.207471  663517 start.go:296] duration metric: took 146.052119ms for postStartSetup
	I1019 12:52:49.207569  663517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:49.207618  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.227005  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.322539  663517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:49.327981  663517 fix.go:56] duration metric: took 5.066251838s for fixHost
	I1019 12:52:49.328013  663517 start.go:83] releasing machines lock for "embed-certs-123864", held for 5.066315254s
	I1019 12:52:49.328080  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:49.348437  663517 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:49.348488  663517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:49.348506  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.348561  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.368071  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.368417  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.525163  663517 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:49.534330  663517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:49.578043  663517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:49.583920  663517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:49.583993  663517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:49.593384  663517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
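The `find ... -exec mv {} {}.mk_disabled` pass sidelines any bridge/podman CNI configs that would conflict with kindnet; here none were present. Had one been disabled, it could be inspected or restored later (a sketch; the conflist name is only an example):

	ls /etc/cni/net.d/*.mk_disabled 2>/dev/null
	# sudo mv /etc/cni/net.d/100-crio-bridge.conflist.mk_disabled \
	#         /etc/cni/net.d/100-crio-bridge.conflist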
	I1019 12:52:49.593406  663517 start.go:495] detecting cgroup driver to use...
	I1019 12:52:49.593463  663517 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:49.593523  663517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:49.612003  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:49.626574  663517 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:49.626639  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:49.641058  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:49.653880  663517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:49.736282  663517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:49.834377  663517 docker.go:234] disabling docker service ...
	I1019 12:52:49.834478  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:49.850898  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:49.864746  663517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:49.939108  663517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:50.014260  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:50.026706  663517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:50.040656  663517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:50.040725  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.049794  663517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:50.049857  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.058814  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.067348  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.075837  663517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:50.083843  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.092439  663517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.100689  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.109083  663517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:50.116037  663517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:50.123017  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.196214  663517 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:52:50.304544  663517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:50.304601  663517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:50.308678  663517 start.go:563] Will wait 60s for crictl version
	I1019 12:52:50.308736  663517 ssh_runner.go:195] Run: which crictl
	I1019 12:52:50.312585  663517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:50.336989  663517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
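The two "Will wait 60s" steps above are a bounded wait for the socket followed by a crictl probe. A shell equivalent (a sketch):

	# Wait up to 60s for the CRI-O socket, then query the runtime version
	timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done' && \
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version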
	I1019 12:52:50.337082  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.365185  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.395636  663517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:50.396988  663517 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:50.414563  663517 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:50.418760  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
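Note the rewrite pattern here: filter the old entry plus the replacement into a temp file, then `sudo cp` over /etc/hosts. Inside a container /etc/hosts is bind-mounted, so tools that rename a temp file over it (such as `sed -i`) fail; copying over the existing inode does not. Generalized (a sketch):

	NAME=host.minikube.internal IP=192.168.76.1
	{ grep -v "\s$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$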
	I1019 12:52:50.429343  663517 kubeadm.go:883] updating cluster {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:50.429499  663517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:50.429554  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.463514  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.463537  663517 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:50.463585  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.489852  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.489884  663517 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:50.489897  663517 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:50.490024  663517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-123864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:50.490091  663517 ssh_runner.go:195] Run: crio config
	I1019 12:52:50.540351  663517 cni.go:84] Creating CNI manager for ""
	I1019 12:52:50.540379  663517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:50.540402  663517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:50.540455  663517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-123864 NodeName:embed-certs-123864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:50.540626  663517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-123864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:50.540708  663517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:50.548975  663517 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:50.549037  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:50.556535  663517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 12:52:50.569078  663517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:50.582078  663517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
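The rendered kubeadm.yaml.new (2214 bytes, the config printed at 12:52:50.540626) is diffed against the live copy further down before deciding whether to reconfigure. It can also be sanity-checked offline (a sketch; `kubeadm config validate` exists in recent kubeadm releases, and the binary path is the one the log just listed):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new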
	I1019 12:52:50.594598  663517 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:50.598683  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:50.609655  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.691984  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:50.714791  663517 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864 for IP: 192.168.76.2
	I1019 12:52:50.714813  663517 certs.go:195] generating shared ca certs ...
	I1019 12:52:50.714830  663517 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:50.714977  663517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:50.715024  663517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:50.715035  663517 certs.go:257] generating profile certs ...
	I1019 12:52:50.715113  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key
	I1019 12:52:50.715153  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b
	I1019 12:52:50.715189  663517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key
	I1019 12:52:50.715286  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:50.715311  663517 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:50.715320  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:50.715340  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:50.715362  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:50.715384  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:50.715443  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:50.716041  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:50.735271  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:50.755214  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:50.777014  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:50.800199  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:52:50.821324  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:50.839279  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:50.856965  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:50.874445  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:50.891496  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:50.908559  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:50.927767  663517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:50.941573  663517 ssh_runner.go:195] Run: openssl version
	I1019 12:52:50.947724  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:50.956196  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.959953  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.960001  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.995897  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:52:51.005114  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:51.013652  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017476  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017521  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.051306  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:51.059843  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:51.068625  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072364  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072434  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.106768  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
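The `<hash>.0` symlinks created above (b5213941.0, 3ec20f2e.0, 51391683.0) follow OpenSSL's CApath convention: the link name is the certificate's subject hash, which lets TLS clients locate a CA by hash in /etc/ssl/certs. The same step by hand (a sketch):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem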
	I1019 12:52:51.115327  663517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:51.119266  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:51.155239  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:51.191302  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:51.231935  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:51.281478  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:51.335604  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
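Each `-checkend 86400` call exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether certs need regenerating. Standalone (a sketch):

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "cert valid for at least another day"
	else
	  echo "cert expires within 24h; regenerate"
	fi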
	I1019 12:52:51.389971  663517 kubeadm.go:400] StartCluster: {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:51.390086  663517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:51.390161  663517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:51.427193  663517 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:52:51.427217  663517 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:52:51.427222  663517 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:52:51.427225  663517 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:52:51.427228  663517 cri.go:89] found id: ""
	I1019 12:52:51.427267  663517 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:51.440120  663517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:51Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:51.440220  663517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:51.449733  663517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:51.449753  663517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:51.449805  663517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:51.458169  663517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:51.459058  663517 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-123864" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.459546  663517 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-123864" cluster setting kubeconfig missing "embed-certs-123864" context setting]
	I1019 12:52:51.460311  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.462264  663517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:51.470636  663517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 12:52:51.470666  663517 kubeadm.go:601] duration metric: took 20.906449ms to restartPrimaryControlPlane
	I1019 12:52:51.470676  663517 kubeadm.go:402] duration metric: took 80.715661ms to StartCluster
	I1019 12:52:51.470710  663517 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.470784  663517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.472656  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.472905  663517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:51.473029  663517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:51.473122  663517 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-123864"
	I1019 12:52:51.473142  663517 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-123864"
	W1019 12:52:51.473150  663517 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:51.473154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.473167  663517 addons.go:69] Setting dashboard=true in profile "embed-certs-123864"
	I1019 12:52:51.473186  663517 addons.go:238] Setting addon dashboard=true in "embed-certs-123864"
	I1019 12:52:51.473190  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	W1019 12:52:51.473196  663517 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:51.473194  663517 addons.go:69] Setting default-storageclass=true in profile "embed-certs-123864"
	I1019 12:52:51.473226  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.473225  663517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-123864"
	I1019 12:52:51.473582  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473805  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473960  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.476597  663517 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:51.479247  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:51.500794  663517 addons.go:238] Setting addon default-storageclass=true in "embed-certs-123864"
	W1019 12:52:51.500880  663517 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:51.500970  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.501574  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.502354  663517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:51.503126  663517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:51.503854  663517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.503891  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:51.503970  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.505618  663517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:47.131514  664256 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-999693" ...
	I1019 12:52:47.131575  664256 cli_runner.go:164] Run: docker start default-k8s-diff-port-999693
	I1019 12:52:47.384629  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.402936  664256 kic.go:430] container "default-k8s-diff-port-999693" state is running.
	I1019 12:52:47.403379  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:47.423463  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.423767  664256 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:47.423874  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:47.444517  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.444842  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:47.444866  664256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:47.445518  664256 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41262->127.0.0.1:33495: read: connection reset by peer
	I1019 12:52:50.583537  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.583567  664256 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-999693"
	I1019 12:52:50.583650  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.604186  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.604410  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.604444  664256 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-999693 && echo "default-k8s-diff-port-999693" | sudo tee /etc/hostname
	I1019 12:52:50.751627  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.751775  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.773964  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.774248  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.774277  664256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-999693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-999693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-999693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:50.913745  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:50.913786  664256 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:50.913836  664256 ubuntu.go:190] setting up certificates
	I1019 12:52:50.913870  664256 provision.go:84] configureAuth start
	I1019 12:52:50.913952  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:50.934395  664256 provision.go:143] copyHostCerts
	I1019 12:52:50.934470  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:50.934487  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:50.934554  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:50.934664  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:50.934673  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:50.934711  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:50.934808  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:50.934820  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:50.934849  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:50.934971  664256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-999693 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-999693 localhost minikube]
	I1019 12:52:51.181197  664256 provision.go:177] copyRemoteCerts
	I1019 12:52:51.181259  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:51.181302  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.200908  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.299582  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:51.321298  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 12:52:51.347057  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:51.372503  664256 provision.go:87] duration metric: took 458.610195ms to configureAuth
	I1019 12:52:51.372536  664256 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:51.372758  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.372944  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.397897  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:51.398221  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:51.398253  664256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:51.787740  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:51.787770  664256 machine.go:96] duration metric: took 4.36398321s to provisionDockerMachine
	I1019 12:52:51.787784  664256 start.go:293] postStartSetup for "default-k8s-diff-port-999693" (driver="docker")
	I1019 12:52:51.787799  664256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:51.787891  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:51.787950  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.813780  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.920668  664256 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:51.925324  664256 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:51.925357  664256 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:51.925370  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:51.925448  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:51.925552  664256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:51.925688  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:51.936356  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:51.957175  664256 start.go:296] duration metric: took 169.373131ms for postStartSetup
	I1019 12:52:51.957258  664256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:51.957327  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.980799  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.081065  664256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:52.087117  664256 fix.go:56] duration metric: took 4.974857045s for fixHost
	I1019 12:52:52.087152  664256 start.go:83] releasing machines lock for "default-k8s-diff-port-999693", held for 4.974914543s
	I1019 12:52:52.087228  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:52.111457  664256 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:52.111517  664256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:52.111598  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.111518  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.137014  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.137025  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.314908  664256 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:52.323209  664256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:52.366367  664256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:52.371765  664256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:52.371833  664256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:52.381186  664256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:52.381210  664256 start.go:495] detecting cgroup driver to use...
	I1019 12:52:52.381243  664256 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:52.381290  664256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:52.399404  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:52.414594  664256 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:52.414655  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:52.432231  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:52.447748  664256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:52.544771  664256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:52.640880  664256 docker.go:234] disabling docker service ...
	I1019 12:52:52.640958  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:52.658680  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:52.672412  664256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:52.769106  664256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:52.884868  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:52.906499  664256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:52.933714  664256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:52.933784  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.948702  664256 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:52.948841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.962681  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.976376  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.993092  664256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:53.001841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.017733  664256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.032955  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.050801  664256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:53.067622  664256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:53.083829  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.206267  664256 ssh_runner.go:195] Run: sudo systemctl restart crio
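	
The sed pipeline above is what rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart. A quick hand-check of the result (expected values reconstructed from the sed commands above, not captured from this run):
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",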
	I1019 12:52:53.349143  664256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:53.349212  664256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:53.355228  664256 start.go:563] Will wait 60s for crictl version
	I1019 12:52:53.355416  664256 ssh_runner.go:195] Run: which crictl
	I1019 12:52:53.361171  664256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:53.398217  664256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:53.398309  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.428293  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.468822  664256 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:51.507351  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:51.507377  663517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:51.507478  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.528518  663517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.528547  663517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:51.528609  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.529319  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.537540  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.560844  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.652064  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:51.659469  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.665965  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:51.665989  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:51.672138  663517 node_ready.go:35] waiting up to 6m0s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:51.685068  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.686285  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:51.686312  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:51.706556  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:51.706583  663517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:51.726874  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:51.726898  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:51.745384  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:51.745410  663517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:51.761707  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:51.761733  663517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:51.779101  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:51.779128  663517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:51.797377  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:51.797405  663517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:51.812263  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:51.812286  663517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:51.829889  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:53.072809  663517 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:53.072851  663517 node_ready.go:38] duration metric: took 1.400666832s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:53.072871  663517 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:53.072920  663517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:53.700121  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040605714s)
	I1019 12:52:53.700176  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.01507119s)
	I1019 12:52:53.700245  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.870328808s)
	I1019 12:52:53.700294  663517 api_server.go:72] duration metric: took 2.22734911s to wait for apiserver process to appear ...
	I1019 12:52:53.700347  663517 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:53.700370  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:53.702124  663517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-123864 addons enable metrics-server
	
	I1019 12:52:53.707464  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:53.707492  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
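	
A 500 from /healthz at this point is expected: the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that have not finished yet after the restart, and minikube keeps polling until the endpoint reports ok. kube-apiserver also exposes each healthz check as its own path, so a single failing check can be probed in isolation (host and port taken from this run):
	curl -k "https://192.168.76.2:8443/healthz?verbose"                               # the aggregate view polled above
	curl -k "https://192.168.76.2:8443/healthz/poststarthook/rbac/bootstrap-roles"    # one failing check on its own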
	I1019 12:52:53.714665  663517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:53.716036  663517 addons.go:514] duration metric: took 2.243010209s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:53.470131  664256 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:53.492572  664256 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:53.498533  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
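	
The grep-then-rewrite one-liner above (and its control-plane.minikube.internal twin later in the log) deliberately copies over /etc/hosts rather than renaming onto it: inside a Docker container /etc/hosts is a bind mount, so a rename would fail, while cp rewrites the mounted inode in place. Unrolled for readability (TMP is a hypothetical name standing in for /tmp/h.$$):
	TMP=/tmp/h.$$
	{ grep -v $'\thost.minikube.internal$' /etc/hosts        # drop any stale entry
	  echo "192.168.85.1	host.minikube.internal"; } > "$TMP"  # append the fresh mapping
	sudo cp "$TMP" /etc/hosts                                # cp, not mv: keep the bind-mounted inode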
	I1019 12:52:53.511548  664256 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:53.511704  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:53.511776  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.554672  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.554693  664256 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:53.554740  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.588812  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.588842  664256 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:53.588852  664256 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 12:52:53.588996  664256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-999693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:53.589088  664256 ssh_runner.go:195] Run: crio config
	I1019 12:52:53.643663  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:53.643692  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:53.643715  664256 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:53.643745  664256 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-999693 NodeName:default-k8s-diff-port-999693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:53.643935  664256 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-999693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:53.644016  664256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:53.652520  664256 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:53.652594  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:53.660846  664256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 12:52:53.674227  664256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:53.687240  664256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
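	
The rendered config just staged as /var/tmp/minikube/kubeadm.yaml.new is diffed against the live copy further down before kubeadm decides whether a restart suffices. If you want to sanity-check a generated file like this one by hand, recent kubeadm releases ship a validate subcommand (assumed present in the v1.34.1 binaries this run uses):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new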
	I1019 12:52:53.700930  664256 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:53.705067  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:53.717166  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.801260  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:53.825321  664256 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693 for IP: 192.168.85.2
	I1019 12:52:53.825347  664256 certs.go:195] generating shared ca certs ...
	I1019 12:52:53.825370  664256 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:53.825553  664256 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:53.825597  664256 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:53.825608  664256 certs.go:257] generating profile certs ...
	I1019 12:52:53.825725  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/client.key
	I1019 12:52:53.825803  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key.8ef1e1bb
	I1019 12:52:53.825855  664256 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key
	I1019 12:52:53.826004  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:53.826045  664256 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:53.826057  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:53.826084  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:53.826120  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:53.826159  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:53.826218  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:53.827044  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:53.850305  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:53.874056  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:53.900302  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:53.924868  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 12:52:53.943707  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:52:53.960778  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:53.977601  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 12:52:53.994887  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:54.012296  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:54.038626  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:54.063497  664256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:54.079249  664256 ssh_runner.go:195] Run: openssl version
	I1019 12:52:54.086057  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:54.097143  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102203  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102259  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.158908  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:54.169449  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:54.182754  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188730  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188802  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.244383  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:54.254644  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:54.263550  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267515  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267578  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.304899  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
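	
The three ln -fs steps above implement OpenSSL's hashed-symlink lookup: openssl x509 -hash prints the subject-name hash that OpenSSL expects as the link name in /etc/ssl/certs, and the .0 suffix disambiguates hash collisions. The same wiring by hand (3ec20f2e being the hash this run computed for 3552622.pem):
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem)   # -> 3ec20f2e
	sudo ln -fs /etc/ssl/certs/3552622.pem "/etc/ssl/certs/${h}.0"
	# on Debian-based images, update-ca-certificates wires this up the distro-native way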
	I1019 12:52:54.313985  664256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:54.317801  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:54.360081  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:54.405761  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:54.464318  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:54.525359  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:54.563734  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 12:52:54.608045  664256 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:54.608169  664256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:54.608231  664256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:54.649470  664256 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:52:54.649495  664256 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:52:54.649501  664256 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:52:54.649506  664256 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:52:54.649511  664256 cri.go:89] found id: ""
	I1019 12:52:54.649557  664256 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:54.665837  664256 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:54Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:54.665908  664256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:54.677684  664256 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:54.677708  664256 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:54.677757  664256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:54.687556  664256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:54.689468  664256 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-999693" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.690566  664256 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-999693" cluster setting kubeconfig missing "default-k8s-diff-port-999693" context setting]
	I1019 12:52:54.691940  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.694639  664256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:54.705918  664256 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:52:54.705949  664256 kubeadm.go:601] duration metric: took 28.235813ms to restartPrimaryControlPlane
	I1019 12:52:54.705960  664256 kubeadm.go:402] duration metric: took 97.926007ms to StartCluster
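
The "needs updating (will repair)" step a few lines up rewrites the kubeconfig to add the missing cluster and context entries before the control-plane restart. A minimal client-go sketch of the same update; the file path, server URL, and CA location are placeholders:

// kubeconfig_repair.go: add a missing cluster/context entry to a kubeconfig.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // placeholder path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}
	name := "default-k8s-diff-port-999693"
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = "https://192.168.85.2:8444"
		c.CertificateAuthority = "/path/to/ca.crt" // placeholder CA
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.AuthInfos[name]; !ok {
		cfg.AuthInfos[name] = api.NewAuthInfo() // placeholder credentials
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
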
	I1019 12:52:54.705977  664256 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.706033  664256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.708821  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.709325  664256 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:54.709463  664256 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.709490  664256 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.709502  664256 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:54.709534  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.709617  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:54.709548  664256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:54.709808  664256 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.710141  664256 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.710161  664256 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:54.710191  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.711868  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.712514  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.709821  664256 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.713522  664256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-999693"
	I1019 12:52:54.713860  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.714625  664256 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:54.715871  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:54.746297  664256 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:54.747517  664256 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:54.747552  664256 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:54.749165  664256 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-999693"
	I1019 12:52:54.749177  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1019 12:52:54.749186  664256 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:54.749191  664256 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:54.749216  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.749232  664256 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.749245  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:54.749256  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749306  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749711  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.783580  664256 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.783608  664256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:54.783676  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.787579  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.788172  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.817481  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.916555  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:54.916589  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:54.918652  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:54.921391  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.939730  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:54.939840  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:54.940294  664256 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:54.941172  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.960699  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:54.960783  664256 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:54.976260  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:54.976341  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:54.996375  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:54.996401  664256 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:55.017050  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:55.017079  664256 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:55.033603  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:55.033632  664256 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:55.048007  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:55.048032  664256 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:55.063077  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:55.063102  664256 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:55.078449  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
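
The addon installer above follows one pattern throughout: copy each manifest into /etc/kubernetes/addons, then apply them all in a single kubectl invocation against the in-cluster kubeconfig. A local sketch of that pattern, assuming kubectl on PATH and write access to the addons directory; the inlined namespace manifest stands in for the dashboard files listed above, which minikube copies over SSH instead:

// apply_addons.go: write manifests to a directory, then kubectl-apply them.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/addons" // requires root to write
	manifests := map[string][]byte{
		"dashboard-ns.yaml": []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n"),
		// ...remaining dashboard manifests would go here...
	}
	args := []string{"apply"}
	for name, data := range manifests {
		path := filepath.Join(dir, name)
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
		args = append(args, "-f", path)
	}
	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
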
	I1019 12:52:56.495857  664256 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:56.495897  664256 node_ready.go:38] duration metric: took 1.555549648s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:56.495915  664256 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:56.495982  664256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:57.096998  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.175567368s)
	I1019 12:52:57.097030  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.155826931s)
	I1019 12:52:57.097189  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018704195s)
	I1019 12:52:57.097307  664256 api_server.go:72] duration metric: took 2.387607096s to wait for apiserver process to appear ...
	I1019 12:52:57.097327  664256 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:57.097348  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.100178  664256 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-999693 addons enable metrics-server
	
	I1019 12:52:57.102943  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.102968  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:57.105461  664256 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
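
The readiness probe driving the healthz checks above is a plain GET against /healthz that treats anything other than 200 as a retry; the 500 bodies list each failed poststarthook until rbac/bootstrap-roles completes. A minimal Go sketch of that loop, assuming the apiserver's self-signed certificate (hence InsecureSkipVerify) and the endpoint shown in the log:

// healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8444/healthz"
	for deadline := time.Now().Add(2 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not accepting connections yet
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthy:", string(body)) // body is literally "ok"
			return
		}
		// On 500 the body enumerates failed poststarthooks, as logged above.
	}
	fmt.Println("timed out waiting for /healthz")
}
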
	I1019 12:52:54.200764  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.206405  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:54.206480  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:54.701368  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.709189  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:54.710714  663517 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:54.710735  663517 api_server.go:131] duration metric: took 1.010380706s to wait for apiserver health ...
	I1019 12:52:54.710745  663517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:54.721732  663517 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:54.721787  663517 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.721804  663517 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.721814  663517 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.721826  663517 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.721838  663517 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.721893  663517 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.721905  663517 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.721926  663517 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.721934  663517 system_pods.go:74] duration metric: took 11.182501ms to wait for pod list to return data ...
	I1019 12:52:54.721949  663517 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:54.728320  663517 default_sa.go:45] found service account: "default"
	I1019 12:52:54.728404  663517 default_sa.go:55] duration metric: took 6.446433ms for default service account to be created ...
	I1019 12:52:54.728450  663517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:54.742048  663517 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:54.742087  663517 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.742747  663517 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.743381  663517 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.743410  663517 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.743900  663517 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.744078  663517 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.744455  663517 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.744805  663517 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.744821  663517 system_pods.go:126] duration metric: took 16.360253ms to wait for k8s-apps to be running ...
	I1019 12:52:54.745172  663517 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:54.745631  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:54.769658  663517 system_svc.go:56] duration metric: took 24.811398ms WaitForService to wait for kubelet
	I1019 12:52:54.769727  663517 kubeadm.go:586] duration metric: took 3.296760449s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:54.769750  663517 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:54.773633  663517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:54.773745  663517 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:54.773776  663517 node_conditions.go:105] duration metric: took 4.019851ms to run NodePressure ...
	I1019 12:52:54.773995  663517 start.go:241] waiting for startup goroutines ...
	I1019 12:52:54.774026  663517 start.go:246] waiting for cluster config update ...
	I1019 12:52:54.774043  663517 start.go:255] writing updated cluster config ...
	I1019 12:52:54.774837  663517 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:54.781544  663517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:54.790057  663517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:56.796654  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
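
Both runs then enter the same extra wait: poll the labelled kube-system pods until each reports the PodReady condition, or give up after 4m0s. A client-go sketch of that loop, with the kubeconfig path and label selector taken as assumptions from the log:

// pod_ready_wait.go: wait until a labelled kube-system pod is Ready.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient: retry until the timeout
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == v1.PodReady && c.Status == v1.ConditionTrue {
						fmt.Println("ready:", p.Name)
						return true, nil
					}
				}
			}
			return false, nil // pods exist but are not Ready yet
		})
	if err != nil {
		panic(err)
	}
}
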
	I1019 12:52:57.109849  664256 addons.go:514] duration metric: took 2.400528693s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:57.598353  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.604765  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.604814  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:58.098137  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:58.103228  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:58.104494  664256 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:58.104523  664256 api_server.go:131] duration metric: took 1.007188483s to wait for apiserver health ...
	I1019 12:52:58.104535  664256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:58.108083  664256 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:58.108110  664256 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.108118  664256 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.108124  664256 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.108130  664256 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.108142  664256 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.108150  664256 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.108159  664256 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.108168  664256 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.108179  664256 system_pods.go:74] duration metric: took 3.637436ms to wait for pod list to return data ...
	I1019 12:52:58.108192  664256 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:58.110578  664256 default_sa.go:45] found service account: "default"
	I1019 12:52:58.110596  664256 default_sa.go:55] duration metric: took 2.39546ms for default service account to be created ...
	I1019 12:52:58.110604  664256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:58.113444  664256 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:58.113473  664256 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.113485  664256 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.113496  664256 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.113516  664256 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.113527  664256 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.113534  664256 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.113539  664256 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.113545  664256 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.113553  664256 system_pods.go:126] duration metric: took 2.943742ms to wait for k8s-apps to be running ...
	I1019 12:52:58.113563  664256 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:58.113613  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:58.128579  664256 system_svc.go:56] duration metric: took 15.004824ms WaitForService to wait for kubelet
	I1019 12:52:58.128609  664256 kubeadm.go:586] duration metric: took 3.418911937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:58.128632  664256 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:58.131784  664256 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:58.131819  664256 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:58.131832  664256 node_conditions.go:105] duration metric: took 3.194851ms to run NodePressure ...
	I1019 12:52:58.131843  664256 start.go:241] waiting for startup goroutines ...
	I1019 12:52:58.131850  664256 start.go:246] waiting for cluster config update ...
	I1019 12:52:58.131862  664256 start.go:255] writing updated cluster config ...
	I1019 12:52:58.132300  664256 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:58.136574  664256 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:58.140912  664256 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:53:00.147567  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
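
Before the output switches to component logs, each run also verifies node capacity and pressure conditions; the 304681132Ki and 8-cpu figures above come straight from node status. A client-go sketch of reading those fields, with the node name and kubeconfig path assumed:

// node_capacity.go: read node capacity and conditions, as node_conditions.go logs.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-999693", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[v1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[v1.ResourceCPU]
	fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
	// MemoryPressure/DiskPressure/PIDPressure live in node.Status.Conditions.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
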
	
	
	==> CRI-O <==
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.767103158Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9dcdb517ab0da35aa313d6a637ad2984679c0bfbe61b4cfe2348233171c54c2f/merged/etc/passwd: no such file or directory"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.767138827Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9dcdb517ab0da35aa313d6a637ad2984679c0bfbe61b4cfe2348233171c54c2f/merged/etc/group: no such file or directory"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.768499927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.771385476Z" level=info msg="Removed container 6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4/dashboard-metrics-scraper" id=cefce425-1dfb-449e-b495-62a084c199d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.799156002Z" level=info msg="Created container ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0: kube-system/storage-provisioner/storage-provisioner" id=28d03808-4d95-44da-8b4d-eb02953c93a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.79992987Z" level=info msg="Starting container: ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0" id=7731cb81-9988-443b-88fe-82145540a3f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.801841005Z" level=info msg="Started container" PID=1694 containerID=ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0 description=kube-system/storage-provisioner/storage-provisioner id=7731cb81-9988-443b-88fe-82145540a3f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e414a2747b0a0810e9f18b34d6dcc3a19cfd31694df3baf68c8c127c15fa677e
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.425212745Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.430149094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.430181533Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.43020865Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434191101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434229779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434252989Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438374145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438439202Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438469422Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442810019Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442839199Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442864037Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.454839725Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.45489265Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.455092725Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.465837099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.46589081Z" level=info msg="Updated default CNI network name to kindnet"
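
The "CNI monitoring event" lines above come from a file watcher on /etc/cni/net.d: each CREATE/WRITE/RENAME on a conflist triggers a re-scan and a possible default-network update. A sketch of the same pattern using fsnotify; this illustrates the mechanism, not CRI-O's actual implementation:

// cni_watch.go: watch a CNI config directory and react to conflist changes.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// Matches .conflist and .conflist.temp, both visible in the log above.
		if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 &&
			strings.Contains(ev.Name, ".conflist") {
			log.Printf("CNI monitoring event %s %q: reloading network config", ev.Op, ev.Name)
		}
	}
}
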
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ea70d04b37230       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   e414a2747b0a0       storage-provisioner                          kube-system
	df77f4d327ae8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   fadf510e5eab1       dashboard-metrics-scraper-6ffb444bf9-lrrh4   kubernetes-dashboard
	5799985fefa34       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   1bdf8a843608a       kubernetes-dashboard-855c9754f9-hm7lm        kubernetes-dashboard
	71ca7ab6923e9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   49e3167c49b25       busybox                                      default
	2f726a5e2a456       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   01c9fc5a5722d       coredns-66bc5c9577-pgxlp                     kube-system
	e4ca43f4f6043       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   d207181e75c7c       kube-proxy-lppwp                             kube-system
	020c85d371fff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   e414a2747b0a0       storage-provisioner                          kube-system
	063e2ede2fb5d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   e66b9326f36ac       kindnet-kq4cq                                kube-system
	6c259b4325350       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   f369fdec1c4c8       etcd-no-preload-561408                       kube-system
	f7b8547c0e922       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   a9f277219620e       kube-scheduler-no-preload-561408             kube-system
	9090a5b4e67c9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   9ad86104630e9       kube-controller-manager-no-preload-561408    kube-system
	01ed9d93f2579       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   9a254410b4804       kube-apiserver-no-preload-561408             kube-system
	
	
	==> coredns [2f726a5e2a456524d90c9f4cabeb7cf0ba8039f3ba6d55bd262c7f75669065fb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37439 - 58452 "HINFO IN 3512829246426565864.6072171021658419229. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053714122s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
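
The coredns errors above are not apiserver failures: the list calls time out dialing the in-cluster service IP 10.96.0.1:443, which points at the pod network path rather than the control plane (the host-side healthz checks succeed). A minimal reachability probe for that address; the IP and timeout are assumptions:

// svc_probe.go: check TCP reachability of the in-cluster kubernetes service.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// Matches the "dial tcp 10.96.0.1:443: i/o timeout" seen above.
		fmt.Println("kubernetes service unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("kubernetes service reachable")
}
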
	
	
	==> describe nodes <==
	Name:               no-preload-561408
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-561408
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-561408
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-561408
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-561408
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                7f18081e-0db1-4ca2-b083-85e9821fdde2
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-pgxlp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-561408                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-kq4cq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-no-preload-561408              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-561408     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-lppwp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-no-preload-561408              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lrrh4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm7lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node no-preload-561408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node no-preload-561408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node no-preload-561408 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node no-preload-561408 event: Registered Node no-preload-561408 in Controller
	  Normal  NodeReady                90s                kubelet          Node no-preload-561408 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node no-preload-561408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node no-preload-561408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node no-preload-561408 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node no-preload-561408 event: Registered Node no-preload-561408 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4] <==
	{"level":"warn","ts":"2025-10-19T12:52:14.074335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.081538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.089948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.096000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.101900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.108262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.115207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.122313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.131132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.137665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.145548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.152694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.158935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.166553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.172781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.178945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.193313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.199623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.206839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.212800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.219099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.237924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.245090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.293550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:53:04 up  2:35,  0 user,  load average: 4.67, 4.81, 3.10
	Linux no-preload-561408 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [063e2ede2fb5d7efd8c012dc8a326dea1655039e3c63f156dbcc015d3aa6d400] <==
	I1019 12:52:16.224141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:16.224407       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:52:16.224647       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:16.224671       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:16.224708       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:16.424305       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:16.424330       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:16.424344       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:16.424748       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 12:52:46.424562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 12:52:46.424686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 12:52:46.424874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 12:52:46.425024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 12:52:48.025245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:48.025282       1 metrics.go:72] Registering metrics
	I1019 12:52:48.025367       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:56.424877       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1019 12:52:56.424954       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115] <==
	I1019 12:52:14.774773       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 12:52:14.774791       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 12:52:14.774833       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:14.775215       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 12:52:14.775243       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:14.775255       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:14.775262       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:14.775268       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:52:14.779914       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 12:52:14.780058       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:14.784587       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:14.784673       1 policy_source.go:240] refreshing policies
	I1019 12:52:14.828036       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:52:14.829009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:15.046328       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:15.074367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:15.092776       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:15.098538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:15.105197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:15.135864       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.86.91"}
	I1019 12:52:15.145605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.86.221"}
	I1019 12:52:15.677144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:18.527916       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:52:18.625494       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:52:18.727129       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d] <==
	I1019 12:52:18.172599       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-561408"
	I1019 12:52:18.172658       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:18.172779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:18.172888       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:52:18.172946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:52:18.173260       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:52:18.173308       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:52:18.173387       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:52:18.175661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:52:18.177200       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:52:18.178863       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:18.179142       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:18.179273       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:18.179300       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:52:18.185521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:18.185539       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:52:18.185547       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:52:18.190741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:52:18.191683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:52:18.191698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:52:18.192857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:52:18.195102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 12:52:18.198403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:18.198458       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:52:18.200525       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [e4ca43f4f6043f242e54cacc117ecafdddba7c52f5e782eaac1f1a294095d562] <==
	I1019 12:52:16.062578       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:16.117415       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:16.218377       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:16.218412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:52:16.218519       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:16.237880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:16.237937       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:16.242845       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:16.243272       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:16.243309       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:16.246290       1 config.go:200] "Starting service config controller"
	I1019 12:52:16.246312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:16.246343       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:16.246350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:16.246392       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:16.246462       1 config.go:309] "Starting node config controller"
	I1019 12:52:16.246481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:16.246489       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:16.246652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:16.346861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:16.346905       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:52:16.346994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7] <==
	I1019 12:52:13.501226       1 serving.go:386] Generated self-signed cert in-memory
	W1019 12:52:14.695726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 12:52:14.695776       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 12:52:14.695789       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 12:52:14.695797       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 12:52:14.729288       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:14.729323       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:14.736355       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:14.736388       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:14.737300       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:14.737690       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:14.836762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:52:15 no-preload-561408 kubelet[706]: I1019 12:52:15.745107     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e5712d3-d393-4b98-8346-442229d87b07-xtables-lock\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865066     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb87p\" (UniqueName: \"kubernetes.io/projected/07c4ccb8-982b-4055-8676-f081e5190ce4-kube-api-access-tb87p\") pod \"kubernetes-dashboard-855c9754f9-hm7lm\" (UID: \"07c4ccb8-982b-4055-8676-f081e5190ce4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865144     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/07c4ccb8-982b-4055-8676-f081e5190ce4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hm7lm\" (UID: \"07c4ccb8-982b-4055-8676-f081e5190ce4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865199     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a75ac11f-ac61-469e-8fa3-20312154a189-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrrh4\" (UID: \"a75ac11f-ac61-469e-8fa3-20312154a189\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865279     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzqc7\" (UniqueName: \"kubernetes.io/projected/a75ac11f-ac61-469e-8fa3-20312154a189-kube-api-access-mzqc7\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrrh4\" (UID: \"a75ac11f-ac61-469e-8fa3-20312154a189\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4"
	Oct 19 12:52:24 no-preload-561408 kubelet[706]: I1019 12:52:24.762322     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm" podStartSLOduration=3.042860041 podStartE2EDuration="6.762283723s" podCreationTimestamp="2025-10-19 12:52:18 +0000 UTC" firstStartedPulling="2025-10-19 12:52:19.120033557 +0000 UTC m=+6.554615620" lastFinishedPulling="2025-10-19 12:52:22.839457222 +0000 UTC m=+10.274039302" observedRunningTime="2025-10-19 12:52:23.756684629 +0000 UTC m=+11.191266697" watchObservedRunningTime="2025-10-19 12:52:24.762283723 +0000 UTC m=+12.196865806"
	Oct 19 12:52:25 no-preload-561408 kubelet[706]: I1019 12:52:25.702307     706 scope.go:117] "RemoveContainer" containerID="c22f77748bb61f6fc3f9db7dba2352ad956c10339941579456a85d86f80d7cb2"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: I1019 12:52:26.706174     706 scope.go:117] "RemoveContainer" containerID="c22f77748bb61f6fc3f9db7dba2352ad956c10339941579456a85d86f80d7cb2"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: I1019 12:52:26.706345     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: E1019 12:52:26.706629     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:27 no-preload-561408 kubelet[706]: I1019 12:52:27.710028     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:27 no-preload-561408 kubelet[706]: E1019 12:52:27.710196     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:35 no-preload-561408 kubelet[706]: I1019 12:52:35.102198     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:35 no-preload-561408 kubelet[706]: E1019 12:52:35.102439     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.650974     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.757261     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.757517     706 scope.go:117] "RemoveContainer" containerID="df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: E1019 12:52:46.757750     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.759069     706 scope.go:117] "RemoveContainer" containerID="020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818"
	Oct 19 12:52:55 no-preload-561408 kubelet[706]: I1019 12:52:55.102766     706 scope.go:117] "RemoveContainer" containerID="df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	Oct 19 12:52:55 no-preload-561408 kubelet[706]: E1019 12:52:55.103034     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:53:01 no-preload-561408 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:01 no-preload-561408 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:01 no-preload-561408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:01 no-preload-561408 systemd[1]: kubelet.service: Consumed 1.558s CPU time.
	
	
	==> kubernetes-dashboard [5799985fefa34297176d719d0444775a1e3245e7e4e852cb78f47add03751360] <==
	2025/10/19 12:52:22 Using namespace: kubernetes-dashboard
	2025/10/19 12:52:22 Using in-cluster config to connect to apiserver
	2025/10/19 12:52:22 Using secret token for csrf signing
	2025/10/19 12:52:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:52:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:52:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:52:22 Generating JWE encryption key
	2025/10/19 12:52:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:52:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:52:23 Initializing JWE encryption key from synchronized object
	2025/10/19 12:52:23 Creating in-cluster Sidecar client
	2025/10/19 12:52:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Serving insecurely on HTTP port: 9090
	2025/10/19 12:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:22 Starting overwatch
	
	
	==> storage-provisioner [020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818] <==
	I1019 12:52:16.034546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:52:46.036867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0] <==
	I1019 12:52:46.815056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:46.822859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:46.822912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:52:46.825317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:50.280751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:54.541393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:58.140509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:01.201864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.225343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.230369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:04.230593       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:04.230700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2a2da65-ffdf-4b5c-be11-c5e8f123ddea", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5 became leader
	I1019 12:53:04.230798       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5!
	W1019 12:53:04.232947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.238211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:04.331087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5!
	

                                                
                                                
-- /stdout --
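
The storage-provisioner blocks above show both halves of the failure: the first instance dies on an apiserver i/o timeout while the cluster is paused, and its replacement re-acquires the kube-system/k8s.io-minikube-hostpath lock while logging repeated "v1 Endpoints is deprecated in v1.33+" warnings, because it still uses Endpoints-based leader election. A minimal client-go sketch of the Lease-based lock those warnings point toward, assuming in-cluster config and reusing the lock namespace/name from the log, could look like this (the timings are conventional client-go values, not ones taken from the provisioner):

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// In-cluster config, as the provisioner pod would use it.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		hostname, _ := os.Hostname()
		// Lease-based lock replacing the deprecated Endpoints lock;
		// namespace and name mirror the lease seen in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath",
			},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() {
					os.Exit(1) // lost the lease; exit and re-elect on restart
				},
			},
		})
	}
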
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-561408 -n no-preload-561408
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-561408 -n no-preload-561408: exit status 2 (391.529916ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-561408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1019 12:53:05.534413  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
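
The cert_rotation.go:172 errors interleaved with the harness steps all point at the same file: the shared kubeconfig retains a user entry for the already-deleted auto-931932 profile, so the client cert loader keeps failing on its client.crt. A small hypothetical checker, assuming the kubeconfig path shown in the report, can list such stale entries via client-go's clientcmd loader:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG path from this report's environment; adjust as needed.
		path := "/home/jenkins/minikube-integration/21772-351705/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue // inline or token-based credentials
			}
			// Report users whose certificate file no longer exists on disk.
			if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
				fmt.Printf("stale user %q -> missing %s\n", name, auth.ClientCertificate)
			}
		}
	}
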
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-561408
E1019 12:53:05.619486  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:243: (dbg) docker inspect no-preload-561408:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	        "Created": "2025-10-19T12:50:45.391801747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 657809,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:06.431171295Z",
	            "FinishedAt": "2025-10-19T12:52:05.317296149Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hostname",
	        "HostsPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/hosts",
	        "LogPath": "/var/lib/docker/containers/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583/a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583-json.log",
	        "Name": "/no-preload-561408",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-561408:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-561408",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a52c329ec080a971856f3c95f08e997c153e5298b0d9def6460cdcc1dfcaa583",
	                "LowerDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6288165495fe743f3168f10ebe2b1785cd769498c22f951727a4dfaac7696c1b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-561408",
	                "Source": "/var/lib/docker/volumes/no-preload-561408/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-561408",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-561408",
	                "name.minikube.sigs.k8s.io": "no-preload-561408",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d663382b594673719efdfb7fe4418752523ea860ef845dc7d933dce7316a70fb",
	            "SandboxKey": "/var/run/docker/netns/d663382b5946",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-561408": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:4a:01:6d:c4:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f4a13c0b85cf53d05b4d14cdbcd2a320c735f036b2f0ba0e125d18fecb5483e",
	                    "EndpointID": "8bbe028a228bba720bf21f285bbd5a35394aa84a1b362b5a7a2870d444886ba4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-561408",
	                        "a52c329ec080"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
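
One detail worth reading out of the inspect output: HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), and the ports Docker actually assigned (33485-33489) appear only under NetworkSettings.Ports. A short sketch with the Docker Go SDK, assuming the container name above, resolves the live 8443 mapping much as `docker port no-preload-561408 8443/tcp` would:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "no-preload-561408")
		if err != nil {
			panic(err)
		}
		// HostConfig.PortBindings asked for "any free port" (empty HostPort);
		// the ports Docker actually picked live under NetworkSettings.Ports.
		for _, binding := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("apiserver forwarded to %s:%s\n", binding.HostIP, binding.HostPort)
		}
	}
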
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408
E1019 12:53:05.781730  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408: exit status 2 (368.956017ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-561408 logs -n 25
E1019 12:53:06.103088  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:06.745396  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-561408 logs -n 25: (1.454002277s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-931932 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-577062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ ssh     │ -p bridge-931932 sudo crio config                                                                                                                                                                                                             │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p bridge-931932                                                                                                                                                                                                                              │ bridge-931932                │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ stop    │ -p old-k8s-version-577062 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ delete  │ -p disable-driver-mounts-591165                                                                                                                                                                                                               │ disable-driver-mounts-591165 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:52:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:52:46.925201  664256 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:52:46.925511  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925521  664256 out.go:374] Setting ErrFile to fd 2...
	I1019 12:52:46.925526  664256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:52:46.925724  664256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:52:46.926177  664256 out.go:368] Setting JSON to false
	I1019 12:52:46.927476  664256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9315,"bootTime":1760869052,"procs":360,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:52:46.927572  664256 start.go:141] virtualization: kvm guest
	I1019 12:52:46.929196  664256 out.go:179] * [default-k8s-diff-port-999693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:52:46.930756  664256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:52:46.930801  664256 notify.go:220] Checking for updates...
	I1019 12:52:46.932758  664256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:52:46.934048  664256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:46.935192  664256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:52:46.936498  664256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:52:46.937762  664256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:52:46.939394  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:46.939848  664256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:52:46.963683  664256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:52:46.963772  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.023378  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.013329476 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.023535  664256 docker.go:318] overlay module found
	I1019 12:52:47.025269  664256 out.go:179] * Using the docker driver based on existing profile
	I1019 12:52:47.026568  664256 start.go:305] selected driver: docker
	I1019 12:52:47.026597  664256 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.026732  664256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:52:47.027471  664256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:52:47.086363  664256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-19 12:52:47.076802932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:52:47.086679  664256 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:47.086707  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:47.086755  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:47.086787  664256 start.go:349] cluster config:
	{Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:47.088476  664256 out.go:179] * Starting "default-k8s-diff-port-999693" primary control-plane node in "default-k8s-diff-port-999693" cluster
	I1019 12:52:47.089564  664256 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:52:47.090727  664256 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:52:47.091742  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:47.091773  664256 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:52:47.091781  664256 cache.go:58] Caching tarball of preloaded images
	I1019 12:52:47.091796  664256 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:52:47.091859  664256 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:52:47.091870  664256 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:52:47.091959  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.112105  664256 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:52:47.112128  664256 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:52:47.112142  664256 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:52:47.112172  664256 start.go:360] acquireMachinesLock for default-k8s-diff-port-999693: {Name:mke26e7439408c8adecea1bbb9344a31dd77b3c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:52:47.112226  664256 start.go:364] duration metric: took 36.455µs to acquireMachinesLock for "default-k8s-diff-port-999693"
	I1019 12:52:47.112245  664256 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:52:47.112252  664256 fix.go:54] fixHost starting: 
	I1019 12:52:47.112490  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.129772  664256 fix.go:112] recreateIfNeeded on default-k8s-diff-port-999693: state=Stopped err=<nil>
	W1019 12:52:47.129802  664256 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:52:44.281015  663517 out.go:252] * Restarting existing docker container for "embed-certs-123864" ...
	I1019 12:52:44.281101  663517 cli_runner.go:164] Run: docker start embed-certs-123864
	I1019 12:52:44.526509  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:44.546310  663517 kic.go:430] container "embed-certs-123864" state is running.
	I1019 12:52:44.546720  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:44.565833  663517 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/config.json ...
	I1019 12:52:44.566069  663517 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:44.566147  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:44.585705  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:44.585938  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:44.585949  663517 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:44.586499  663517 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58104->127.0.0.1:33490: read: connection reset by peer
	I1019 12:52:47.734652  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.734694  663517 ubuntu.go:182] provisioning hostname "embed-certs-123864"
	I1019 12:52:47.734763  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.754305  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.754574  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.754594  663517 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-123864 && echo "embed-certs-123864" | sudo tee /etc/hostname
	I1019 12:52:47.900303  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-123864
	
	I1019 12:52:47.900379  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.918114  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.918334  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:47.918355  663517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-123864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-123864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-123864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:48.051196  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
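	Note: the hostname script above follows the Debian/Ubuntu convention of mapping the machine's own name to 127.0.1.1. A minimal standalone sketch of the same /etc/hosts edit (NAME is a placeholder for the target hostname):
	  # Map the host's own name to 127.0.1.1 without touching other entries.
	  NAME=embed-certs-123864
	  if ! grep -q "\s${NAME}$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	      # Replace an existing 127.0.1.1 line in place.
	      sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts
	    else
	      # No 127.0.1.1 line yet; append one.
	      echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	    fi
	  fi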
	I1019 12:52:48.051226  663517 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:48.051276  663517 ubuntu.go:190] setting up certificates
	I1019 12:52:48.051294  663517 provision.go:84] configureAuth start
	I1019 12:52:48.051351  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:48.069277  663517 provision.go:143] copyHostCerts
	I1019 12:52:48.069333  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:48.069349  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:48.069433  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:48.069546  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:48.069557  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:48.069604  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:48.069660  663517 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:48.069667  663517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:48.069692  663517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:48.069741  663517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.embed-certs-123864 san=[127.0.0.1 192.168.76.2 embed-certs-123864 localhost minikube]
	I1019 12:52:48.585780  663517 provision.go:177] copyRemoteCerts
	I1019 12:52:48.585838  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:48.585871  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.604279  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:48.702233  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:48.720721  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:52:48.738512  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:48.755942  663517 provision.go:87] duration metric: took 704.627825ms to configureAuth
	I1019 12:52:48.755977  663517 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:48.756154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:48.756278  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:48.775133  663517 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:48.775433  663517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I1019 12:52:48.775459  663517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:49.061359  663517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:49.061389  663517 machine.go:96] duration metric: took 4.495303282s to provisionDockerMachine
	I1019 12:52:49.061401  663517 start.go:293] postStartSetup for "embed-certs-123864" (driver="docker")
	I1019 12:52:49.061414  663517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:49.061511  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:49.061564  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:47.787829  657553 pod_ready.go:94] pod "coredns-66bc5c9577-pgxlp" is "Ready"
	I1019 12:52:47.787855  657553 pod_ready.go:86] duration metric: took 31.504899877s for pod "coredns-66bc5c9577-pgxlp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.789711  657553 pod_ready.go:83] waiting for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.793406  657553 pod_ready.go:94] pod "etcd-no-preload-561408" is "Ready"
	I1019 12:52:47.793446  657553 pod_ready.go:86] duration metric: took 3.709623ms for pod "etcd-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.795182  657553 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.798678  657553 pod_ready.go:94] pod "kube-apiserver-no-preload-561408" is "Ready"
	I1019 12:52:47.798700  657553 pod_ready.go:86] duration metric: took 3.496714ms for pod "kube-apiserver-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.800596  657553 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:47.986813  657553 pod_ready.go:94] pod "kube-controller-manager-no-preload-561408" is "Ready"
	I1019 12:52:47.986842  657553 pod_ready.go:86] duration metric: took 186.220802ms for pod "kube-controller-manager-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.186670  657553 pod_ready.go:83] waiting for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.586865  657553 pod_ready.go:94] pod "kube-proxy-lppwp" is "Ready"
	I1019 12:52:48.586892  657553 pod_ready.go:86] duration metric: took 400.184165ms for pod "kube-proxy-lppwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.785758  657553 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186913  657553 pod_ready.go:94] pod "kube-scheduler-no-preload-561408" is "Ready"
	I1019 12:52:49.186953  657553 pod_ready.go:86] duration metric: took 401.160394ms for pod "kube-scheduler-no-preload-561408" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.186968  657553 pod_ready.go:40] duration metric: took 32.907293647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.233509  657553 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:52:49.235163  657553 out.go:179] * Done! kubectl is now configured to use "no-preload-561408" cluster and "default" namespace by default
	W1019 12:52:47.528927  655442 pod_ready.go:104] pod "coredns-5dd5756b68-44mqv" is not "Ready", error: <nil>
	I1019 12:52:48.027407  655442 pod_ready.go:94] pod "coredns-5dd5756b68-44mqv" is "Ready"
	I1019 12:52:48.027445  655442 pod_ready.go:86] duration metric: took 40.505181601s for pod "coredns-5dd5756b68-44mqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.030160  655442 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.034042  655442 pod_ready.go:94] pod "etcd-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.034071  655442 pod_ready.go:86] duration metric: took 3.888307ms for pod "etcd-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.036741  655442 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.040245  655442 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.040263  655442 pod_ready.go:86] duration metric: took 3.503128ms for pod "kube-apiserver-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.042393  655442 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.225329  655442 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-577062" is "Ready"
	I1019 12:52:48.225354  655442 pod_ready.go:86] duration metric: took 182.944102ms for pod "kube-controller-manager-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.426194  655442 pod_ready.go:83] waiting for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:48.826171  655442 pod_ready.go:94] pod "kube-proxy-lhths" is "Ready"
	I1019 12:52:48.826194  655442 pod_ready.go:86] duration metric: took 399.973598ms for pod "kube-proxy-lhths" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.025864  655442 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425023  655442 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-577062" is "Ready"
	I1019 12:52:49.425051  655442 pod_ready.go:86] duration metric: took 399.16124ms for pod "kube-scheduler-old-k8s-version-577062" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:52:49.425063  655442 pod_ready.go:40] duration metric: took 41.909017776s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:49.471302  655442 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1019 12:52:49.473153  655442 out.go:203] 
	W1019 12:52:49.474513  655442 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1019 12:52:49.475817  655442 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1019 12:52:49.477137  655442 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-577062" cluster and "default" namespace by default
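	Note: the pod_ready waits above poll pods carrying the standard control-plane labels (k8s-app=kube-dns, component=etcd, and so on) until each reports Ready. An illustrative CLI equivalent of one such wait (not what minikube runs internally):
	  kubectl --context no-preload-561408 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=120s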
	I1019 12:52:49.080598  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.176835  663517 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:49.180594  663517 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:49.180624  663517 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:49.180639  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:49.180704  663517 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:49.180802  663517 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:49.180915  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:49.188874  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:49.207471  663517 start.go:296] duration metric: took 146.052119ms for postStartSetup
	I1019 12:52:49.207569  663517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:49.207618  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.227005  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.322539  663517 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:49.327981  663517 fix.go:56] duration metric: took 5.066251838s for fixHost
	I1019 12:52:49.328013  663517 start.go:83] releasing machines lock for "embed-certs-123864", held for 5.066315254s
	I1019 12:52:49.328080  663517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-123864
	I1019 12:52:49.348437  663517 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:49.348488  663517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:49.348506  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.348561  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:49.368071  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.368417  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:49.525163  663517 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:49.534330  663517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:49.578043  663517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:49.583920  663517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:49.583993  663517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:49.593384  663517 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:49.593406  663517 start.go:495] detecting cgroup driver to use...
	I1019 12:52:49.593463  663517 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:49.593523  663517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:49.612003  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:49.626574  663517 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:49.626639  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:49.641058  663517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:49.653880  663517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:49.736282  663517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:49.834377  663517 docker.go:234] disabling docker service ...
	I1019 12:52:49.834478  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:49.850898  663517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:49.864746  663517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:49.939108  663517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:50.014260  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:50.026706  663517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:50.040656  663517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:50.040725  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.049794  663517 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:50.049857  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.058814  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.067348  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.075837  663517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:50.083843  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.092439  663517 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.100689  663517 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:50.109083  663517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:50.116037  663517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
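	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (a reconstruction from the commands, not captured from the host; the section headers assume the stock CRI-O config layout):
	  [crio.image]
	  # Pause image pinned to match the Kubernetes version bundle.
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  # Match the host's systemd cgroup driver; run conmon in the pod cgroup.
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]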
	I1019 12:52:50.123017  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.196214  663517 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:52:50.304544  663517 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:50.304601  663517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:52:50.308678  663517 start.go:563] Will wait 60s for crictl version
	I1019 12:52:50.308736  663517 ssh_runner.go:195] Run: which crictl
	I1019 12:52:50.312585  663517 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:50.336989  663517 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:50.337082  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.365185  663517 ssh_runner.go:195] Run: crio --version
	I1019 12:52:50.395636  663517 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:50.396988  663517 cli_runner.go:164] Run: docker network inspect embed-certs-123864 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
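	Note: the --format argument above is a Go template rendered against the network object; a trimmed variant that extracts just the subnet looks like:
	  docker network inspect embed-certs-123864 \
	    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'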
	I1019 12:52:50.414563  663517 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:50.418760  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
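	Note: the one-liner above rewrites /etc/hosts in a single pass; unpacked, the same filter-append-copy technique reads:
	  # Drop any stale host.minikube.internal line, append the fresh mapping,
	  # then copy the temp file over /etc/hosts in one privileged step.
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.76.1\thost.minikube.internal'
	  } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts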
	I1019 12:52:50.429343  663517 kubeadm.go:883] updating cluster {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:50.429499  663517 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:50.429554  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.463514  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.463537  663517 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:50.463585  663517 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:50.489852  663517 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:50.489884  663517 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:50.489897  663517 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1019 12:52:50.490024  663517 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-123864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:50.490091  663517 ssh_runner.go:195] Run: crio config
	I1019 12:52:50.540351  663517 cni.go:84] Creating CNI manager for ""
	I1019 12:52:50.540379  663517 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:50.540402  663517 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:50.540455  663517 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-123864 NodeName:embed-certs-123864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:50.540626  663517 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-123864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
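	Note: the kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (scp'd a few lines below). A hypothetical way to sanity-check such a file outside minikube, using kubeadm's own dry-run mode:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run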
	
	I1019 12:52:50.540708  663517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:50.548975  663517 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:50.549037  663517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:50.556535  663517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1019 12:52:50.569078  663517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:50.582078  663517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1019 12:52:50.594598  663517 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:50.598683  663517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:50.609655  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:50.691984  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:50.714791  663517 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864 for IP: 192.168.76.2
	I1019 12:52:50.714813  663517 certs.go:195] generating shared ca certs ...
	I1019 12:52:50.714830  663517 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:50.714977  663517 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:50.715024  663517 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:50.715035  663517 certs.go:257] generating profile certs ...
	I1019 12:52:50.715113  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/client.key
	I1019 12:52:50.715153  663517 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key.ef142c6b
	I1019 12:52:50.715189  663517 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key
	I1019 12:52:50.715286  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:50.715311  663517 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:50.715320  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:50.715340  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:50.715362  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:50.715384  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:50.715443  663517 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:50.716041  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:50.735271  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:50.755214  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:50.777014  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:50.800199  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 12:52:50.821324  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:52:50.839279  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:50.856965  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/embed-certs-123864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:52:50.874445  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:50.891496  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:50.908559  663517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:50.927767  663517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:50.941573  663517 ssh_runner.go:195] Run: openssl version
	I1019 12:52:50.947724  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:50.956196  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.959953  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.960001  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:50.995897  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:52:51.005114  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:51.013652  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017476  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.017521  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:51.051306  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:51.059843  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:51.068625  663517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072364  663517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.072434  663517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:51.106768  663517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
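
Note: the block above computes each CA's OpenSSL subject hash and links it as <hash>.0 under /etc/ssl/certs (b5213941.0 for minikubeCA, for example), which is how OpenSSL-based clients locate trust anchors. A sketch of creating one such link, shelling out to openssl for the hash; the certificate path is just an example:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path
        // Same command the log runs: prints the subject hash, e.g. "b5213941".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "openssl:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        // Recreate the link idempotently, mirroring "ln -fs".
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink:", err)
            os.Exit(1)
        }
        fmt.Println("linked", link, "->", pem)
    }
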
	I1019 12:52:51.115327  663517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:51.119266  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:51.155239  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:51.191302  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:51.231935  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:51.281478  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:51.335604  663517 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
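
Note: the "-checkend 86400" runs above make openssl exit non-zero when a certificate expires within the next 24 hours, which is how minikube decides whether control-plane certs need regeneration. The same test in pure Go with crypto/x509, as a sketch (the path is an example):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in pemBytes expires
    // before now+d, i.e. what "openssl x509 -checkend <seconds>" tests.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // example path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        soon, err := expiresWithin(data, 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
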
	I1019 12:52:51.389971  663517 kubeadm.go:400] StartCluster: {Name:embed-certs-123864 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-123864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:51.390086  663517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:51.390161  663517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:51.427193  663517 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:52:51.427217  663517 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:52:51.427222  663517 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:52:51.427225  663517 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:52:51.427228  663517 cri.go:89] found id: ""
	I1019 12:52:51.427267  663517 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:51.440120  663517 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:51Z" level=error msg="open /run/runc: no such file or directory"
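
Note: StartCluster enumerates kube-system containers through crictl's label filter; the trailing empty found id above is likely an artifact of splitting the newline-terminated output, and the runc listing fails harmlessly because /run/runc does not exist on this node, so the unpause check is logged as a warning and skipped. A sketch of the same enumeration:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // --quiet prints one container ID per line.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "crictl:", err)
            os.Exit(1)
        }
        // strings.Fields drops the empty trailing element that a plain
        // strings.Split would keep after the final newline.
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }
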
	I1019 12:52:51.440220  663517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:51.449733  663517 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:51.449753  663517 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:51.449805  663517 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:51.458169  663517 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:51.459058  663517 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-123864" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.459546  663517 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-123864" cluster setting kubeconfig missing "embed-certs-123864" context setting]
	I1019 12:52:51.460311  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.462264  663517 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:51.470636  663517 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1019 12:52:51.470666  663517 kubeadm.go:601] duration metric: took 20.906449ms to restartPrimaryControlPlane
	I1019 12:52:51.470676  663517 kubeadm.go:402] duration metric: took 80.715661ms to StartCluster
	I1019 12:52:51.470710  663517 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.470784  663517 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:51.472656  663517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:51.472905  663517 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:51.473029  663517 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:51.473122  663517 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-123864"
	I1019 12:52:51.473142  663517 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-123864"
	W1019 12:52:51.473150  663517 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:51.473154  663517 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.473167  663517 addons.go:69] Setting dashboard=true in profile "embed-certs-123864"
	I1019 12:52:51.473186  663517 addons.go:238] Setting addon dashboard=true in "embed-certs-123864"
	I1019 12:52:51.473190  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	W1019 12:52:51.473196  663517 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:51.473194  663517 addons.go:69] Setting default-storageclass=true in profile "embed-certs-123864"
	I1019 12:52:51.473226  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.473225  663517 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-123864"
	I1019 12:52:51.473582  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473805  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.473960  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.476597  663517 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:51.479247  663517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:51.500794  663517 addons.go:238] Setting addon default-storageclass=true in "embed-certs-123864"
	W1019 12:52:51.500880  663517 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:51.500970  663517 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:52:51.501574  663517 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:52:51.502354  663517 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:51.503126  663517 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:51.503854  663517 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.503891  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:51.503970  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.505618  663517 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:47.131514  664256 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-999693" ...
	I1019 12:52:47.131575  664256 cli_runner.go:164] Run: docker start default-k8s-diff-port-999693
	I1019 12:52:47.384629  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:47.402936  664256 kic.go:430] container "default-k8s-diff-port-999693" state is running.
	I1019 12:52:47.403379  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:47.423463  664256 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/config.json ...
	I1019 12:52:47.423767  664256 machine.go:93] provisionDockerMachine start ...
	I1019 12:52:47.423874  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:47.444517  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:47.444842  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:47.444866  664256 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:52:47.445518  664256 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41262->127.0.0.1:33495: read: connection reset by peer
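
Note: this reset is just sshd inside the freshly restarted container not yet accepting connections; as the next lines show, the dial is retried and succeeds about three seconds later. A retry-loop sketch with golang.org/x/crypto/ssh (address, credentials, and timeout are placeholders):

    package sshretry

    import (
        "time"

        "golang.org/x/crypto/ssh"
    )

    // DialWithRetry keeps dialing until sshd answers or the deadline passes,
    // absorbing the "connection reset by peer" seen while a container boots.
    func DialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
        deadline := time.Now().Add(timeout)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            if time.Now().After(deadline) {
                return nil, err // give up with the last dial error
            }
            time.Sleep(time.Second)
        }
    }
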
	I1019 12:52:50.583537  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.583567  664256 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-999693"
	I1019 12:52:50.583650  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.604186  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.604410  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.604444  664256 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-999693 && echo "default-k8s-diff-port-999693" | sudo tee /etc/hostname
	I1019 12:52:50.751627  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-999693
	
	I1019 12:52:50.751775  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:50.773964  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:50.774248  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:50.774277  664256 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-999693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-999693/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-999693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:52:50.913745  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:52:50.913786  664256 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:52:50.913836  664256 ubuntu.go:190] setting up certificates
	I1019 12:52:50.913870  664256 provision.go:84] configureAuth start
	I1019 12:52:50.913952  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:50.934395  664256 provision.go:143] copyHostCerts
	I1019 12:52:50.934470  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:52:50.934487  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:52:50.934554  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:52:50.934664  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:52:50.934673  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:52:50.934711  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:52:50.934808  664256 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:52:50.934820  664256 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:52:50.934849  664256 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:52:50.934971  664256 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-999693 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-999693 localhost minikube]
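
Note: provision.go:117 issues a server certificate whose SAN list mixes IPs (127.0.0.1, 192.168.85.2) and DNS names (the machine name, localhost, minikube). A sketch of signing such a certificate against an existing CA with crypto/x509; the helper is not minikube's own and serial-number handling is simplified:

    package provision

    import (
        "crypto"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // SignServerCert issues a DER-encoded server certificate for the given
    // SANs, signed by caCert/caKey. pub is the server's public key.
    func SignServerCert(caCert *x509.Certificate, caKey crypto.Signer, pub crypto.PublicKey,
        dnsNames []string, ips []net.IP) ([]byte, error) {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-999693"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // e.g. default-k8s-diff-port-999693, localhost, minikube
            IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.85.2
        }
        return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
    }
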
	I1019 12:52:51.181197  664256 provision.go:177] copyRemoteCerts
	I1019 12:52:51.181259  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:52:51.181302  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.200908  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.299582  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:52:51.321298  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1019 12:52:51.347057  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 12:52:51.372503  664256 provision.go:87] duration metric: took 458.610195ms to configureAuth
	I1019 12:52:51.372536  664256 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:52:51.372758  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:51.372944  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.397897  664256 main.go:141] libmachine: Using SSH client type: native
	I1019 12:52:51.398221  664256 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1019 12:52:51.398253  664256 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:52:51.787740  664256 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:52:51.787770  664256 machine.go:96] duration metric: took 4.36398321s to provisionDockerMachine
	I1019 12:52:51.787784  664256 start.go:293] postStartSetup for "default-k8s-diff-port-999693" (driver="docker")
	I1019 12:52:51.787799  664256 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:52:51.787891  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:52:51.787950  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.813780  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:51.920668  664256 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:52:51.925324  664256 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:52:51.925357  664256 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:52:51.925370  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:52:51.925448  664256 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:52:51.925552  664256 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:52:51.925688  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:52:51.936356  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:51.957175  664256 start.go:296] duration metric: took 169.373131ms for postStartSetup
	I1019 12:52:51.957258  664256 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:52:51.957327  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:51.980799  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.081065  664256 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:52:52.087117  664256 fix.go:56] duration metric: took 4.974857045s for fixHost
	I1019 12:52:52.087152  664256 start.go:83] releasing machines lock for "default-k8s-diff-port-999693", held for 4.974914543s
	I1019 12:52:52.087228  664256 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-999693
	I1019 12:52:52.111457  664256 ssh_runner.go:195] Run: cat /version.json
	I1019 12:52:52.111517  664256 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:52:52.111598  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.111518  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:52.137014  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.137025  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:52.314908  664256 ssh_runner.go:195] Run: systemctl --version
	I1019 12:52:52.323209  664256 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:52:52.366367  664256 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:52:52.371765  664256 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:52:52.371833  664256 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:52:52.381186  664256 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:52:52.381210  664256 start.go:495] detecting cgroup driver to use...
	I1019 12:52:52.381243  664256 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:52:52.381290  664256 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:52:52.399404  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:52:52.414594  664256 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:52:52.414655  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:52:52.432231  664256 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:52:52.447748  664256 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:52:52.544771  664256 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:52:52.640880  664256 docker.go:234] disabling docker service ...
	I1019 12:52:52.640958  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:52:52.658680  664256 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:52:52.672412  664256 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:52:52.769106  664256 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:52:52.884868  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:52:52.906499  664256 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:52:52.933714  664256 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:52:52.933784  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.948702  664256 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:52:52.948841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.962681  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.976376  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:52.993092  664256 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:52:53.001841  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.017733  664256 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:52:53.032955  664256 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
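
Note: this run of sed edits rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, re-add conmon_cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. For illustration, the pause-image edit expressed as the same anchored line replacement in Go:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // (?m) makes ^ and $ match per line, like sed's default addressing.
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
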
	I1019 12:52:53.050801  664256 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:52:53.067622  664256 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:52:53.083829  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.206267  664256 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:52:53.349143  664256 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:52:53.349212  664256 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
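
Note: start.go:542 polls for the crio socket for up to 60 seconds after the restart. The same wait in miniature:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the timeout elapses.
    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(250 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("socket is up")
    }
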
	I1019 12:52:53.355228  664256 start.go:563] Will wait 60s for crictl version
	I1019 12:52:53.355416  664256 ssh_runner.go:195] Run: which crictl
	I1019 12:52:53.361171  664256 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:52:53.398217  664256 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:52:53.398309  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.428293  664256 ssh_runner.go:195] Run: crio --version
	I1019 12:52:53.468822  664256 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:52:51.507351  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:52:51.507377  663517 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:51.507478  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.528518  663517 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.528547  663517 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:51.528609  663517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:52:51.529319  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.537540  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.560844  663517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:52:51.652064  663517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:51.659469  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:51.665965  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:51.665989  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:51.672138  663517 node_ready.go:35] waiting up to 6m0s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:51.685068  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:51.686285  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:51.686312  663517 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:51.706556  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:51.706583  663517 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:51.726874  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:51.726898  663517 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:51.745384  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:51.745410  663517 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:51.761707  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:51.761733  663517 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:51.779101  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:51.779128  663517 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:51.797377  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:51.797405  663517 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:51.812263  663517 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:51.812286  663517 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:51.829889  663517 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:53.072809  663517 node_ready.go:49] node "embed-certs-123864" is "Ready"
	I1019 12:52:53.072851  663517 node_ready.go:38] duration metric: took 1.400666832s for node "embed-certs-123864" to be "Ready" ...
	I1019 12:52:53.072871  663517 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:53.072920  663517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:53.700121  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.040605714s)
	I1019 12:52:53.700176  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.01507119s)
	I1019 12:52:53.700245  663517 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.870328808s)
	I1019 12:52:53.700294  663517 api_server.go:72] duration metric: took 2.22734911s to wait for apiserver process to appear ...
	I1019 12:52:53.700347  663517 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:53.700370  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:53.702124  663517 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-123864 addons enable metrics-server
	
	I1019 12:52:53.707464  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:53.707492  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
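
Note: a 500 from /healthz immediately after a restart is expected; every check above reports ok except the rbac and scheduling bootstrap post-start hooks, so minikube keeps polling until they finish. A sketch of such a probe; TLS verification is skipped here for brevity, where a real probe should trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipped verification is a shortcut for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // 200 means healthy; 500 lists which post-start hooks are still pending.
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }
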
	I1019 12:52:53.714665  663517 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:52:53.716036  663517 addons.go:514] duration metric: took 2.243010209s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:53.470131  664256 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-999693 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:52:53.492572  664256 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1019 12:52:53.498533  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
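
Note: the one-liner above updates /etc/hosts idempotently: filter out any stale host.minikube.internal line, append the fresh mapping, and sudo cp the temp file back into place. The filtering step in Go, staging to a temp path and leaving the privileged copy to the caller:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry drops any existing line for name and appends a fresh
    // "ip<TAB>name" mapping, mirroring the grep -v / echo / cp shell trick.
    func upsertHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry: filter it out, like grep -v
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        out := upsertHostsEntry(string(data), "192.168.85.1", "host.minikube.internal")
        // Stage the result; copying it back over /etc/hosts needs privileges
        // (the log does it with "sudo cp").
        if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("wrote /tmp/hosts.new")
    }
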
	I1019 12:52:53.511548  664256 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:52:53.511704  664256 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:52:53.511776  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.554672  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.554693  664256 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:52:53.554740  664256 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:52:53.588812  664256 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:52:53.588842  664256 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:52:53.588852  664256 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1019 12:52:53.588996  664256 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-999693 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:52:53.589088  664256 ssh_runner.go:195] Run: crio config
	I1019 12:52:53.643663  664256 cni.go:84] Creating CNI manager for ""
	I1019 12:52:53.643692  664256 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:52:53.643715  664256 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 12:52:53.643745  664256 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-999693 NodeName:default-k8s-diff-port-999693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:52:53.643935  664256 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-999693"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:52:53.644016  664256 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:52:53.652520  664256 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:52:53.652594  664256 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:52:53.660846  664256 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1019 12:52:53.674227  664256 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:52:53.687240  664256 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1019 12:52:53.700930  664256 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:52:53.705067  664256 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:52:53.717166  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:53.801260  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:53.825321  664256 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693 for IP: 192.168.85.2
	I1019 12:52:53.825347  664256 certs.go:195] generating shared ca certs ...
	I1019 12:52:53.825370  664256 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:53.825553  664256 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:52:53.825597  664256 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:52:53.825608  664256 certs.go:257] generating profile certs ...
	I1019 12:52:53.825725  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/client.key
	I1019 12:52:53.825803  664256 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key.8ef1e1bb
	I1019 12:52:53.825855  664256 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key
	I1019 12:52:53.826004  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:52:53.826045  664256 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:52:53.826057  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:52:53.826084  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:52:53.826120  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:52:53.826159  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:52:53.826218  664256 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:52:53.827044  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:52:53.850305  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:52:53.874056  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:52:53.900302  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:52:53.924868  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1019 12:52:53.943707  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 12:52:53.960778  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:52:53.977601  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/default-k8s-diff-port-999693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 12:52:53.994887  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:52:54.012296  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:52:54.038626  664256 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:52:54.063497  664256 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:52:54.079249  664256 ssh_runner.go:195] Run: openssl version
	I1019 12:52:54.086057  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:52:54.097143  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102203  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.102259  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:52:54.158908  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:52:54.169449  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:52:54.182754  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188730  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.188802  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:52:54.244383  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:52:54.254644  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:52:54.263550  664256 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267515  664256 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.267578  664256 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:52:54.304899  664256 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:52:54.313985  664256 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:52:54.317801  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:52:54.360081  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:52:54.405761  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:52:54.464318  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:52:54.525359  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:52:54.563734  664256 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
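
Each -checkend 86400 run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). A pure-Go equivalent using crypto/x509, with an illustrative certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Mirror `openssl x509 -checkend 86400`: fail if expiry falls inside the window.
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
	}
}
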
	I1019 12:52:54.608045  664256 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-999693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-999693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:52:54.608169  664256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:52:54.608231  664256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:52:54.649470  664256 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:52:54.649495  664256 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:52:54.649501  664256 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:52:54.649506  664256 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:52:54.649511  664256 cri.go:89] found id: ""
	I1019 12:52:54.649557  664256 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:52:54.665837  664256 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:52:54Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:52:54.665908  664256 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:52:54.677684  664256 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:52:54.677708  664256 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:52:54.677757  664256 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:52:54.687556  664256 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:52:54.689468  664256 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-999693" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.690566  664256 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-999693" cluster setting kubeconfig missing "default-k8s-diff-port-999693" context setting]
	I1019 12:52:54.691940  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
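
The repair above adds the missing cluster and context entries to the kubeconfig before rewriting it under a file lock. A sketch of the same fix with client-go's clientcmd package (names and paths illustrative, and without the locking):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/user/.kube/config" // hypothetical kubeconfig path
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	name := "default-k8s-diff-port-999693"
	// Add the missing cluster entry.
	cluster := api.NewCluster()
	cluster.Server = "https://192.168.85.2:8444"
	cluster.CertificateAuthority = "/home/user/.minikube/ca.crt" // hypothetical
	cfg.Clusters[name] = cluster
	// Add the missing context entry pointing at it.
	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx
	cfg.CurrentContext = name

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
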
	I1019 12:52:54.694639  664256 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:52:54.705918  664256 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1019 12:52:54.705949  664256 kubeadm.go:601] duration metric: took 28.235813ms to restartPrimaryControlPlane
	I1019 12:52:54.705960  664256 kubeadm.go:402] duration metric: took 97.926007ms to StartCluster
	I1019 12:52:54.705977  664256 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.706033  664256 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:52:54.708821  664256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:52:54.709325  664256 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:52:54.709463  664256 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.709490  664256 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.709502  664256 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:52:54.709534  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.709617  664256 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:52:54.709548  664256 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:52:54.709808  664256 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.710141  664256 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-999693"
	W1019 12:52:54.710161  664256 addons.go:247] addon dashboard should already be in state true
	I1019 12:52:54.710191  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.711868  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.712514  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.709821  664256 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-999693"
	I1019 12:52:54.713522  664256 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-999693"
	I1019 12:52:54.713860  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.714625  664256 out.go:179] * Verifying Kubernetes components...
	I1019 12:52:54.715871  664256 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:52:54.746297  664256 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:52:54.747517  664256 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:52:54.747552  664256 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:52:54.749165  664256 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-999693"
	I1019 12:52:54.749177  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W1019 12:52:54.749186  664256 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:52:54.749191  664256 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:52:54.749216  664256 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:52:54.749232  664256 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.749245  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:52:54.749256  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749306  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.749711  664256 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:52:54.783580  664256 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.783608  664256 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:52:54.783676  664256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:52:54.787579  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.788172  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.817481  664256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:52:54.916555  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:52:54.916589  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:52:54.918652  664256 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:52:54.921391  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:52:54.939730  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:52:54.939840  664256 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:52:54.940294  664256 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-999693" to be "Ready" ...
	I1019 12:52:54.941172  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:52:54.960699  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:52:54.960783  664256 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:52:54.976260  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:52:54.976341  664256 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:52:54.996375  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:52:54.996401  664256 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:52:55.017050  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:52:55.017079  664256 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:52:55.033603  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:52:55.033632  664256 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:52:55.048007  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:52:55.048032  664256 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:52:55.063077  664256 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:55.063102  664256 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:52:55.078449  664256 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:52:56.495857  664256 node_ready.go:49] node "default-k8s-diff-port-999693" is "Ready"
	I1019 12:52:56.495897  664256 node_ready.go:38] duration metric: took 1.555549648s for node "default-k8s-diff-port-999693" to be "Ready" ...
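
The Ready wait above polls the node object until its Ready condition turns True. A sketch with client-go (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"default-k8s-diff-port-999693", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(time.Second) // retry until Ready (the real code also enforces a timeout)
	}
}
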
	I1019 12:52:56.495915  664256 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:52:56.495982  664256 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:52:57.096998  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.175567368s)
	I1019 12:52:57.097030  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.155826931s)
	I1019 12:52:57.097189  664256 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018704195s)
	I1019 12:52:57.097307  664256 api_server.go:72] duration metric: took 2.387607096s to wait for apiserver process to appear ...
	I1019 12:52:57.097327  664256 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:52:57.097348  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.100178  664256 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-999693 addons enable metrics-server
	
	I1019 12:52:57.102943  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.102968  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:57.105461  664256 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
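
The healthz exchanges above are expected during a restart: the apiserver answers 500 while post-start hooks such as rbac/bootstrap-roles are still running, then flips to 200 once they finish. A minimal polling sketch, skipping TLS verification purely for brevity (minikube verifies against its own CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	url := "https://192.168.85.2:8444/healthz"
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			fmt.Println("healthz returned", code)
			if code == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // poll until the post-start hooks complete
	}
	fmt.Println("apiserver never became healthy")
}
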
	I1019 12:52:54.200764  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.206405  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:54.206480  663517 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:54.701368  663517 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1019 12:52:54.709189  663517 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1019 12:52:54.710714  663517 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:54.710735  663517 api_server.go:131] duration metric: took 1.010380706s to wait for apiserver health ...
	I1019 12:52:54.710745  663517 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:54.721732  663517 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:54.721787  663517 system_pods.go:61] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.721804  663517 system_pods.go:61] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.721814  663517 system_pods.go:61] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.721826  663517 system_pods.go:61] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.721838  663517 system_pods.go:61] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.721893  663517 system_pods.go:61] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.721905  663517 system_pods.go:61] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.721926  663517 system_pods.go:61] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.721934  663517 system_pods.go:74] duration metric: took 11.182501ms to wait for pod list to return data ...
	I1019 12:52:54.721949  663517 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:54.728320  663517 default_sa.go:45] found service account: "default"
	I1019 12:52:54.728404  663517 default_sa.go:55] duration metric: took 6.446433ms for default service account to be created ...
	I1019 12:52:54.728450  663517 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:54.742048  663517 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:54.742087  663517 system_pods.go:89] "coredns-66bc5c9577-bw9l4" [155bf170-e0c9-4cbb-a5a8-3210902a76d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:54.742747  663517 system_pods.go:89] "etcd-embed-certs-123864" [3ae21280-dd15-40f8-9ee7-817da6d75122] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:54.743381  663517 system_pods.go:89] "kindnet-zkvs7" [39c8c6a5-3b67-4e28-895b-65d5e43fbc5c] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:52:54.743410  663517 system_pods.go:89] "kube-apiserver-embed-certs-123864" [b225d42f-fbe3-4d25-b599-240b6d2e08a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:54.743900  663517 system_pods.go:89] "kube-controller-manager-embed-certs-123864" [8fa28ffd-f8cd-453d-9f1e-7323717159dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:54.744078  663517 system_pods.go:89] "kube-proxy-gvrcz" [3b96feeb-3261-4834-945d-8e8048490377] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:52:54.744455  663517 system_pods.go:89] "kube-scheduler-embed-certs-123864" [b156a6c9-478b-4c74-93d9-76fa96deff9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:54.744805  663517 system_pods.go:89] "storage-provisioner" [55836f6b-0761-4d80-9bb6-6b937954a401] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 12:52:54.744821  663517 system_pods.go:126] duration metric: took 16.360253ms to wait for k8s-apps to be running ...
	I1019 12:52:54.745172  663517 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:54.745631  663517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:54.769658  663517 system_svc.go:56] duration metric: took 24.811398ms WaitForService to wait for kubelet
	I1019 12:52:54.769727  663517 kubeadm.go:586] duration metric: took 3.296760449s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:54.769750  663517 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:54.773633  663517 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:54.773745  663517 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:54.773776  663517 node_conditions.go:105] duration metric: took 4.019851ms to run NodePressure ...
	I1019 12:52:54.773995  663517 start.go:241] waiting for startup goroutines ...
	I1019 12:52:54.774026  663517 start.go:246] waiting for cluster config update ...
	I1019 12:52:54.774043  663517 start.go:255] writing updated cluster config ...
	I1019 12:52:54.774837  663517 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:54.781544  663517 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:54.790057  663517 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:52:56.796654  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:52:57.109849  664256 addons.go:514] duration metric: took 2.400528693s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:52:57.598353  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:57.604765  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:52:57.604814  664256 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:52:58.098137  664256 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1019 12:52:58.103228  664256 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1019 12:52:58.104494  664256 api_server.go:141] control plane version: v1.34.1
	I1019 12:52:58.104523  664256 api_server.go:131] duration metric: took 1.007188483s to wait for apiserver health ...
	I1019 12:52:58.104535  664256 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:52:58.108083  664256 system_pods.go:59] 8 kube-system pods found
	I1019 12:52:58.108110  664256 system_pods.go:61] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.108118  664256 system_pods.go:61] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.108124  664256 system_pods.go:61] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.108130  664256 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.108142  664256 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.108150  664256 system_pods.go:61] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.108159  664256 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.108168  664256 system_pods.go:61] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.108179  664256 system_pods.go:74] duration metric: took 3.637436ms to wait for pod list to return data ...
	I1019 12:52:58.108192  664256 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:52:58.110578  664256 default_sa.go:45] found service account: "default"
	I1019 12:52:58.110596  664256 default_sa.go:55] duration metric: took 2.39546ms for default service account to be created ...
	I1019 12:52:58.110604  664256 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 12:52:58.113444  664256 system_pods.go:86] 8 kube-system pods found
	I1019 12:52:58.113473  664256 system_pods.go:89] "coredns-66bc5c9577-hftjp" [53c60896-3b7d-4f84-bc9d-6eb228b511b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 12:52:58.113485  664256 system_pods.go:89] "etcd-default-k8s-diff-port-999693" [8b0e4a81-ecc1-4b52-810b-2b54b54337ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:52:58.113496  664256 system_pods.go:89] "kindnet-79bv6" [6f614301-5daf-43cc-9013-94bf6d7d161a] Running
	I1019 12:52:58.113516  664256 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-999693" [0e81ff95-bf7d-41ea-9a76-5d2aaff376aa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:52:58.113527  664256 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-999693" [32ae675f-d90f-410c-9d9f-13173a523fcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:52:58.113534  664256 system_pods.go:89] "kube-proxy-cjxjt" [662f6b7b-b302-4d2c-b6b0-c3def258b315] Running
	I1019 12:52:58.113539  664256 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-999693" [69b2077a-fd77-42c0-8a24-8bc6add7f164] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:52:58.113545  664256 system_pods.go:89] "storage-provisioner" [1446462f-3c0a-4cf9-b8a5-7b8096844759] Running
	I1019 12:52:58.113553  664256 system_pods.go:126] duration metric: took 2.943742ms to wait for k8s-apps to be running ...
	I1019 12:52:58.113563  664256 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 12:52:58.113613  664256 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:52:58.128579  664256 system_svc.go:56] duration metric: took 15.004824ms WaitForService to wait for kubelet
	I1019 12:52:58.128609  664256 kubeadm.go:586] duration metric: took 3.418911937s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 12:52:58.128632  664256 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:52:58.131784  664256 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:52:58.131819  664256 node_conditions.go:123] node cpu capacity is 8
	I1019 12:52:58.131832  664256 node_conditions.go:105] duration metric: took 3.194851ms to run NodePressure ...
	I1019 12:52:58.131843  664256 start.go:241] waiting for startup goroutines ...
	I1019 12:52:58.131850  664256 start.go:246] waiting for cluster config update ...
	I1019 12:52:58.131862  664256 start.go:255] writing updated cluster config ...
	I1019 12:52:58.132300  664256 ssh_runner.go:195] Run: rm -f paused
	I1019 12:52:58.136574  664256 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:52:58.140912  664256 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 12:53:00.147567  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:52:59.295731  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:01.298842  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:03.300380  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.767103158Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9dcdb517ab0da35aa313d6a637ad2984679c0bfbe61b4cfe2348233171c54c2f/merged/etc/passwd: no such file or directory"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.767138827Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9dcdb517ab0da35aa313d6a637ad2984679c0bfbe61b4cfe2348233171c54c2f/merged/etc/group: no such file or directory"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.768499927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.771385476Z" level=info msg="Removed container 6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4/dashboard-metrics-scraper" id=cefce425-1dfb-449e-b495-62a084c199d9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.799156002Z" level=info msg="Created container ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0: kube-system/storage-provisioner/storage-provisioner" id=28d03808-4d95-44da-8b4d-eb02953c93a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.79992987Z" level=info msg="Starting container: ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0" id=7731cb81-9988-443b-88fe-82145540a3f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:52:46 no-preload-561408 crio[557]: time="2025-10-19T12:52:46.801841005Z" level=info msg="Started container" PID=1694 containerID=ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0 description=kube-system/storage-provisioner/storage-provisioner id=7731cb81-9988-443b-88fe-82145540a3f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e414a2747b0a0810e9f18b34d6dcc3a19cfd31694df3baf68c8c127c15fa677e
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.425212745Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.430149094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.430181533Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.43020865Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434191101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434229779Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.434252989Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438374145Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438439202Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.438469422Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442810019Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442839199Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.442864037Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.454839725Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.45489265Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.455092725Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.465837099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:52:56 no-preload-561408 crio[557]: time="2025-10-19T12:52:56.46589081Z" level=info msg="Updated default CNI network name to kindnet"
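
The CREATE/WRITE/RENAME sequence above is kindnet updating its CNI config atomically: it writes 10-kindnet.conflist.temp and renames it into place, so CRI-O's directory watcher never reads a half-written file. A sketch of that write-then-rename pattern (paths and config content illustrative):

package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/cni/net.d"
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	tmp := filepath.Join(dir, "10-kindnet.conflist.temp")
	if err := os.WriteFile(tmp, conf, 0o644); err != nil {
		log.Fatal(err)
	}
	// rename(2) is atomic within a filesystem: watchers see either the old
	// file or the complete new one, never a partial write.
	if err := os.Rename(tmp, filepath.Join(dir, "10-kindnet.conflist")); err != nil {
		log.Fatal(err)
	}
}
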
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	ea70d04b37230       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   e414a2747b0a0       storage-provisioner                          kube-system
	df77f4d327ae8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   fadf510e5eab1       dashboard-metrics-scraper-6ffb444bf9-lrrh4   kubernetes-dashboard
	5799985fefa34       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   1bdf8a843608a       kubernetes-dashboard-855c9754f9-hm7lm        kubernetes-dashboard
	71ca7ab6923e9       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   49e3167c49b25       busybox                                      default
	2f726a5e2a456       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   01c9fc5a5722d       coredns-66bc5c9577-pgxlp                     kube-system
	e4ca43f4f6043       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   d207181e75c7c       kube-proxy-lppwp                             kube-system
	020c85d371fff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   e414a2747b0a0       storage-provisioner                          kube-system
	063e2ede2fb5d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   e66b9326f36ac       kindnet-kq4cq                                kube-system
	6c259b4325350       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   f369fdec1c4c8       etcd-no-preload-561408                       kube-system
	f7b8547c0e922       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   a9f277219620e       kube-scheduler-no-preload-561408             kube-system
	9090a5b4e67c9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   9ad86104630e9       kube-controller-manager-no-preload-561408    kube-system
	01ed9d93f2579       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   9a254410b4804       kube-apiserver-no-preload-561408             kube-system
	
	
	==> coredns [2f726a5e2a456524d90c9f4cabeb7cf0ba8039f3ba6d55bd262c7f75669065fb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37439 - 58452 "HINFO IN 3512829246426565864.6072171021658419229. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053714122s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-561408
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-561408
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=no-preload-561408
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-561408
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:52:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:51:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:52:45 +0000   Sun, 19 Oct 2025 12:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-561408
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                7f18081e-0db1-4ca2-b083-85e9821fdde2
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-pgxlp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-561408                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-kq4cq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-561408              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-561408     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-lppwp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-561408              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-lrrh4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hm7lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node no-preload-561408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node no-preload-561408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node no-preload-561408 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node no-preload-561408 event: Registered Node no-preload-561408 in Controller
	  Normal  NodeReady                92s                kubelet          Node no-preload-561408 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node no-preload-561408 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node no-preload-561408 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node no-preload-561408 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-561408 event: Registered Node no-preload-561408 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [6c259b4325350a6198e9a1d8d0eac556ea213104568525890a93d7a828893ce4] <==
	{"level":"warn","ts":"2025-10-19T12:52:14.074335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.081538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.089948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.096000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.101900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.108262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.115207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.122313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.131132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.137665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.145548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.152694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.158935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.166553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.172781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.178945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.187308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.193313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.199623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.206839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.212800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.219099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.237924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.245090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:14.293550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:53:07 up  2:35,  0 user,  load average: 4.86, 4.85, 3.12
	Linux no-preload-561408 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [063e2ede2fb5d7efd8c012dc8a326dea1655039e3c63f156dbcc015d3aa6d400] <==
	I1019 12:52:16.224141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:16.224407       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:52:16.224647       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:16.224671       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:16.224708       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:16.424305       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:16.424330       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:16.424344       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:16.424748       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1019 12:52:46.424562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1019 12:52:46.424686       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1019 12:52:46.424874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1019 12:52:46.425024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1019 12:52:48.025245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:48.025282       1 metrics.go:72] Registering metrics
	I1019 12:52:48.025367       1 controller.go:711] "Syncing nftables rules"
	I1019 12:52:56.424877       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1019 12:52:56.424954       1 main.go:301] handling current node
	I1019 12:53:06.432538       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1019 12:53:06.432579       1 main.go:301] handling current node
	
	
	==> kube-apiserver [01ed9d93f2579a1ea122d6b57e30a1236b2a3f66e97860cfecc6148cae01a115] <==
	I1019 12:52:14.774773       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 12:52:14.774791       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 12:52:14.774833       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:14.775215       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 12:52:14.775243       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:14.775255       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:14.775262       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:14.775268       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:52:14.779914       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1019 12:52:14.780058       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:14.784587       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:14.784673       1 policy_source.go:240] refreshing policies
	I1019 12:52:14.828036       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:52:14.829009       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:15.046328       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:15.074367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:15.092776       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:15.098538       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:15.105197       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:15.135864       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.86.91"}
	I1019 12:52:15.145605       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.86.221"}
	I1019 12:52:15.677144       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:18.527916       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:52:18.625494       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:52:18.727129       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9090a5b4e67c95d31bf16d2ca089106db1a0761e43d712e00a8bf33bc963353d] <==
	I1019 12:52:18.172599       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-561408"
	I1019 12:52:18.172658       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:18.172779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:18.172888       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:52:18.172946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:52:18.173260       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:52:18.173308       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:52:18.173387       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:52:18.175661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:52:18.177200       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:52:18.178863       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:18.179142       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:18.179273       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:18.179300       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:52:18.185521       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:18.185539       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:52:18.185547       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:52:18.190741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:52:18.191683       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:52:18.191698       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:52:18.192857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:52:18.195102       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 12:52:18.198403       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:18.198458       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:52:18.200525       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [e4ca43f4f6043f242e54cacc117ecafdddba7c52f5e782eaac1f1a294095d562] <==
	I1019 12:52:16.062578       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:16.117415       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:16.218377       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:16.218412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:52:16.218519       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:16.237880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:16.237937       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:16.242845       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:16.243272       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:16.243309       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:16.246290       1 config.go:200] "Starting service config controller"
	I1019 12:52:16.246312       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:16.246343       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:16.246350       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:16.246392       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:16.246462       1 config.go:309] "Starting node config controller"
	I1019 12:52:16.246481       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:16.246489       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:16.246652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:16.346861       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:16.346905       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:52:16.346994       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f7b8547c0e92276ea4aa3de0d1355f2d469801e321a4bd5e24851ac65d15e3d7] <==
	I1019 12:52:13.501226       1 serving.go:386] Generated self-signed cert in-memory
	W1019 12:52:14.695726       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 12:52:14.695776       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 12:52:14.695789       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 12:52:14.695797       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 12:52:14.729288       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:14.729323       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:14.736355       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:14.736388       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:14.737300       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:14.737690       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:14.836762       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:52:15 no-preload-561408 kubelet[706]: I1019 12:52:15.745107     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e5712d3-d393-4b98-8346-442229d87b07-xtables-lock\") pod \"kindnet-kq4cq\" (UID: \"1e5712d3-d393-4b98-8346-442229d87b07\") " pod="kube-system/kindnet-kq4cq"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865066     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb87p\" (UniqueName: \"kubernetes.io/projected/07c4ccb8-982b-4055-8676-f081e5190ce4-kube-api-access-tb87p\") pod \"kubernetes-dashboard-855c9754f9-hm7lm\" (UID: \"07c4ccb8-982b-4055-8676-f081e5190ce4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865144     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/07c4ccb8-982b-4055-8676-f081e5190ce4-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-hm7lm\" (UID: \"07c4ccb8-982b-4055-8676-f081e5190ce4\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865199     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a75ac11f-ac61-469e-8fa3-20312154a189-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrrh4\" (UID: \"a75ac11f-ac61-469e-8fa3-20312154a189\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4"
	Oct 19 12:52:18 no-preload-561408 kubelet[706]: I1019 12:52:18.865279     706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzqc7\" (UniqueName: \"kubernetes.io/projected/a75ac11f-ac61-469e-8fa3-20312154a189-kube-api-access-mzqc7\") pod \"dashboard-metrics-scraper-6ffb444bf9-lrrh4\" (UID: \"a75ac11f-ac61-469e-8fa3-20312154a189\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4"
	Oct 19 12:52:24 no-preload-561408 kubelet[706]: I1019 12:52:24.762322     706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hm7lm" podStartSLOduration=3.042860041 podStartE2EDuration="6.762283723s" podCreationTimestamp="2025-10-19 12:52:18 +0000 UTC" firstStartedPulling="2025-10-19 12:52:19.120033557 +0000 UTC m=+6.554615620" lastFinishedPulling="2025-10-19 12:52:22.839457222 +0000 UTC m=+10.274039302" observedRunningTime="2025-10-19 12:52:23.756684629 +0000 UTC m=+11.191266697" watchObservedRunningTime="2025-10-19 12:52:24.762283723 +0000 UTC m=+12.196865806"
	Oct 19 12:52:25 no-preload-561408 kubelet[706]: I1019 12:52:25.702307     706 scope.go:117] "RemoveContainer" containerID="c22f77748bb61f6fc3f9db7dba2352ad956c10339941579456a85d86f80d7cb2"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: I1019 12:52:26.706174     706 scope.go:117] "RemoveContainer" containerID="c22f77748bb61f6fc3f9db7dba2352ad956c10339941579456a85d86f80d7cb2"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: I1019 12:52:26.706345     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:26 no-preload-561408 kubelet[706]: E1019 12:52:26.706629     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:27 no-preload-561408 kubelet[706]: I1019 12:52:27.710028     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:27 no-preload-561408 kubelet[706]: E1019 12:52:27.710196     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:35 no-preload-561408 kubelet[706]: I1019 12:52:35.102198     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:35 no-preload-561408 kubelet[706]: E1019 12:52:35.102439     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.650974     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.757261     706 scope.go:117] "RemoveContainer" containerID="6bb9fca8cb91e92c634a0fe57c08beb4f3fbe3bb2b9300a3533d146d5079c6f6"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.757517     706 scope.go:117] "RemoveContainer" containerID="df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: E1019 12:52:46.757750     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:52:46 no-preload-561408 kubelet[706]: I1019 12:52:46.759069     706 scope.go:117] "RemoveContainer" containerID="020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818"
	Oct 19 12:52:55 no-preload-561408 kubelet[706]: I1019 12:52:55.102766     706 scope.go:117] "RemoveContainer" containerID="df77f4d327ae80f60bf8d9478cc89af7ea33c43e5e8c28c0916303da469e7af3"
	Oct 19 12:52:55 no-preload-561408 kubelet[706]: E1019 12:52:55.103034     706 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-lrrh4_kubernetes-dashboard(a75ac11f-ac61-469e-8fa3-20312154a189)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-lrrh4" podUID="a75ac11f-ac61-469e-8fa3-20312154a189"
	Oct 19 12:53:01 no-preload-561408 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:01 no-preload-561408 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:01 no-preload-561408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:01 no-preload-561408 systemd[1]: kubelet.service: Consumed 1.558s CPU time.
	
	
	==> kubernetes-dashboard [5799985fefa34297176d719d0444775a1e3245e7e4e852cb78f47add03751360] <==
	2025/10/19 12:52:22 Starting overwatch
	2025/10/19 12:52:22 Using namespace: kubernetes-dashboard
	2025/10/19 12:52:22 Using in-cluster config to connect to apiserver
	2025/10/19 12:52:22 Using secret token for csrf signing
	2025/10/19 12:52:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:52:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:52:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:52:22 Generating JWE encryption key
	2025/10/19 12:52:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:52:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:52:23 Initializing JWE encryption key from synchronized object
	2025/10/19 12:52:23 Creating in-cluster Sidecar client
	2025/10/19 12:52:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:52:23 Serving insecurely on HTTP port: 9090
	2025/10/19 12:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [020c85d371fff781f4756c6e8c355ddb7bd7f5a0962e17c03bbb71f5670fd818] <==
	I1019 12:52:16.034546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:52:46.036867       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ea70d04b3723054d0048f663e93576611305094165f6e15c68c81dddbc07caf0] <==
	I1019 12:52:46.815056       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:52:46.822859       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:52:46.822912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:52:46.825317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:50.280751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:54.541393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:52:58.140509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:01.201864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.225343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.230369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:04.230593       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:04.230700       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2a2da65-ffdf-4b5c-be11-c5e8f123ddea", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5 became leader
	I1019 12:53:04.230798       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5!
	W1019 12:53:04.232947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:04.238211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:04.331087       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-561408_8e6b66e1-f2f3-4f5d-8761-25f3d8b329f5!
	W1019 12:53:06.242245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:06.250027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-561408 -n no-preload-561408
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-561408 -n no-preload-561408: exit status 2 (366.81298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
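Per minikube's own status help text, the exit status is a bitmask over host, control plane, and kubernetes health (1, 2, and 4 respectively), so exit status 2 alongside "Running" on stdout points at the control-plane check rather than the host. A fuller breakdown can be pulled as JSON; a sketch, assuming the profile is still up:

	out/minikube-linux-amd64 status -p no-preload-561408 --output=json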
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-561408 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.163711ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
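The MK_ADDON_ENABLE_PAUSED failure above comes from the paused-container check, which shells out to "sudo runc list -f json" inside the node; /run/runc does not exist on this crio image, so the check itself fails before any addon work starts. A minimal reproduction sketch, assuming the node is still reachable over minikube ssh:

	out/minikube-linux-amd64 -p newest-cni-190708 ssh -- sudo runc list -f json
	# expected to fail the same way as the in-test check:
	#   open /run/runc: no such file or directory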
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-190708
helpers_test.go:243: (dbg) docker inspect newest-cni-190708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	        "Created": "2025-10-19T12:53:16.899890869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673365,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:53:16.932673372Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hostname",
	        "HostsPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hosts",
	        "LogPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e-json.log",
	        "Name": "/newest-cni-190708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-190708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-190708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	                "LowerDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-190708",
	                "Source": "/var/lib/docker/volumes/newest-cni-190708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-190708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-190708",
	                "name.minikube.sigs.k8s.io": "newest-cni-190708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c17826221796c6a20c3bd1de41c1d95fe0203f832e9926dca83d3dc814283c8",
	            "SandboxKey": "/var/run/docker/netns/6c1782622179",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-190708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:2b:59:9a:d4:ec",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f377a8132f38263e0c4abe3d087c7fa64425e9bfe055ce9e280edbfae9e21983",
	                    "EndpointID": "f6929b4f46237ebac0407efedc29799b5683d425cdf9bea8f1b91b10dfe685f8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-190708",
	                        "058030ae05d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
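Most of the inspect dump above is container boilerplate; the fields relevant to the failure are the container state and the published ports. Go templates against docker inspect pull them out directly (a convenience sketch, not part of the test harness):

	docker inspect newest-cni-190708 --format '{{.State.Status}} paused={{.State.Paused}}'
	docker inspect newest-cni-190708 --format '{{json .NetworkSettings.Ports}}'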
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190708 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p no-preload-561408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │                     │
	│ stop    │ -p no-preload-561408 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:11.615299  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615311  672737 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:11.615315  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615551  672737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:11.616038  672737 out.go:368] Setting JSON to false
	I1019 12:53:11.617746  672737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9340,"bootTime":1760869052,"procs":566,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:11.617846  672737 start.go:141] virtualization: kvm guest
	I1019 12:53:11.619915  672737 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:11.621699  672737 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:11.621736  672737 notify.go:220] Checking for updates...
	I1019 12:53:11.624129  672737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:11.626246  672737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:11.627453  672737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:11.628681  672737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:11.629995  672737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:11.631642  672737 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631786  672737 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631990  672737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:11.658136  672737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:11.658233  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.722933  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.711540262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.723046  672737 docker.go:318] overlay module found
	I1019 12:53:11.724874  672737 out.go:179] * Using the docker driver based on user configuration
	I1019 12:53:11.726372  672737 start.go:305] selected driver: docker
	I1019 12:53:11.726394  672737 start.go:925] validating driver "docker" against <nil>
	I1019 12:53:11.726412  672737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:11.727020  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.787909  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.778156597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.788107  672737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 12:53:11.788149  672737 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 12:53:11.788529  672737 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:11.790331  672737 out.go:179] * Using Docker driver with root privileges
	I1019 12:53:11.791430  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:11.791511  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:11.791528  672737 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:53:11.791587  672737 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:11.792873  672737 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:11.794127  672737 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:11.795216  672737 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:11.796409  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:11.796465  672737 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:11.796477  672737 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:11.796486  672737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:11.796551  672737 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:11.796562  672737 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:11.796649  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:11.796666  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json: {Name:mk458b42b0f9f21f6e5af311f76e8caf9c4c5efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:11.816881  672737 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:11.816898  672737 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:11.816920  672737 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:11.816943  672737 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:11.817032  672737 start.go:364] duration metric: took 74.015µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:11.817054  672737 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:11.817117  672737 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:53:09.146473  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:11.146837  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:10.296323  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:12.795707  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:11.818963  672737 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:53:11.819197  672737 start.go:159] libmachine.API.Create for "newest-cni-190708" (driver="docker")
	I1019 12:53:11.819227  672737 client.go:168] LocalClient.Create starting
	I1019 12:53:11.819287  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:53:11.819320  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819338  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819384  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:53:11.819402  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819412  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819803  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:53:11.837346  672737 cli_runner.go:211] docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:53:11.837404  672737 network_create.go:284] running [docker network inspect newest-cni-190708] to gather additional debugging logs...
	I1019 12:53:11.837466  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708
	W1019 12:53:11.853768  672737 cli_runner.go:211] docker network inspect newest-cni-190708 returned with exit code 1
	I1019 12:53:11.853794  672737 network_create.go:287] error running [docker network inspect newest-cni-190708]: docker network inspect newest-cni-190708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-190708 not found
	I1019 12:53:11.853806  672737 network_create.go:289] output of [docker network inspect newest-cni-190708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-190708 not found
	
	** /stderr **
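The probe-then-create flow above (inspect the network, treat exit status 1 as "not found" rather than a hard failure, then fall through to creation) can be sketched in a few lines of Go. This is a minimal stand-in, not minikube's network_create.go; the helper name networkExists is invented for illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// networkExists shells out to `docker network inspect` and treats a
	// non-zero exit (the "network ... not found" case logged above) as
	// absence rather than an error.
	func networkExists(name string) (bool, error) {
		cmd := exec.Command("docker", "network", "inspect", name)
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return false, nil // exit status 1: network not found
			}
			return false, err // docker binary missing, etc.
		}
		return true, nil
	}

	func main() {
		ok, err := networkExists("newest-cni-190708")
		fmt.Println(ok, err)
	}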
	I1019 12:53:11.853902  672737 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:11.872131  672737 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:53:11.872777  672737 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:53:11.873176  672737 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:53:11.873710  672737 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:53:11.874346  672737 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de90530a2892 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:1b:d3:5b:94:95} reservation:<nil>}
	I1019 12:53:11.875186  672737 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7d700}
	I1019 12:53:11.875210  672737 network_create.go:124] attempt to create docker network newest-cni-190708 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:53:11.875256  672737 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-190708 newest-cni-190708
	I1019 12:53:11.933015  672737 network_create.go:108] docker network newest-cni-190708 192.168.94.0/24 created
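The subnet scan above walks private /24 candidates (49, 58, 67, 76, 85, then 94 — a stride of 9 in the third octet) and settles on the first one no existing bridge claims. A rough sketch under that assumption, with the taken set hard-coded from the log rather than read from docker:

	package main

	import "fmt"

	// firstFreeSubnet walks /24 candidates from 192.168.49.0 with a stride
	// of 9 in the third octet, mirroring the subnets skipped in the log.
	// Start and stride are read off the log, not a minikube API guarantee.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{ // subnets the log reported as taken
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24
	}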
	I1019 12:53:11.933049  672737 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-190708" container
	I1019 12:53:11.933120  672737 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:53:11.950774  672737 cli_runner.go:164] Run: docker volume create newest-cni-190708 --label name.minikube.sigs.k8s.io=newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:53:11.967572  672737 oci.go:103] Successfully created a docker volume newest-cni-190708
	I1019 12:53:11.967650  672737 cli_runner.go:164] Run: docker run --rm --name newest-cni-190708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --entrypoint /usr/bin/test -v newest-cni-190708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:53:12.367353  672737 oci.go:107] Successfully prepared a docker volume newest-cni-190708
	I1019 12:53:12.367407  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:12.367450  672737 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:53:12.367533  672737 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 12:53:13.646716  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.646757  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.295646  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:17.297846  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:16.825912  672737 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458335671s)
	I1019 12:53:16.825946  672737 kic.go:203] duration metric: took 4.45849341s to extract preloaded images to volume ...
	W1019 12:53:16.826042  672737 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:53:16.826073  672737 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:53:16.826110  672737 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:53:16.883735  672737 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-190708 --name newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-190708 --network newest-cni-190708 --ip 192.168.94.2 --volume newest-cni-190708:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:53:17.149721  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Running}}
	I1019 12:53:17.168092  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.187070  672737 cli_runner.go:164] Run: docker exec newest-cni-190708 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:53:17.235594  672737 oci.go:144] the created container "newest-cni-190708" has a running status.
	I1019 12:53:17.235624  672737 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa...
	I1019 12:53:17.641114  672737 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:53:17.666983  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.686164  672737 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:53:17.686197  672737 kic_runner.go:114] Args: [docker exec --privileged newest-cni-190708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:53:17.730607  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.748800  672737 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:17.748886  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.768809  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.769043  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.769056  672737 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:17.904434  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:17.904466  672737 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:53:17.904532  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.923140  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.923351  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.923364  672737 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:53:18.066330  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:18.066401  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.084720  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.084937  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.084955  672737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:53:18.218215  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:53:18.218243  672737 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:53:18.218295  672737 ubuntu.go:190] setting up certificates
	I1019 12:53:18.218310  672737 provision.go:84] configureAuth start
	I1019 12:53:18.218377  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.236696  672737 provision.go:143] copyHostCerts
	I1019 12:53:18.236757  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:53:18.236768  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:53:18.236836  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:53:18.236929  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:53:18.236938  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:53:18.236966  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:53:18.237022  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:53:18.237030  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:53:18.237052  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:53:18.237101  672737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:53:18.349002  672737 provision.go:177] copyRemoteCerts
	I1019 12:53:18.349061  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:53:18.349100  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.367380  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.464934  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:53:18.484736  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:53:18.502418  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:53:18.520374  672737 provision.go:87] duration metric: took 302.043863ms to configureAuth
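configureAuth boils down to minting a server certificate whose SANs match the log line above (127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-190708). A self-contained sketch using Go's crypto/x509 — self-signed for brevity, whereas the real provisioning step signs with the minikube CA key pair:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-190708"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			DNSNames:     []string{"localhost", "minikube", "newest-cni-190708"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed: template doubles as parent. minikube would pass the
		// CA certificate and CA private key here instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}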
	I1019 12:53:18.520411  672737 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:53:18.520616  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:18.520715  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.539107  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.539337  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.539356  672737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:53:18.783336  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:53:18.783368  672737 machine.go:96] duration metric: took 1.034543859s to provisionDockerMachine
	I1019 12:53:18.783380  672737 client.go:171] duration metric: took 6.964145323s to LocalClient.Create
	I1019 12:53:18.783403  672737 start.go:167] duration metric: took 6.964207211s to libmachine.API.Create "newest-cni-190708"
	I1019 12:53:18.783410  672737 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:53:18.783444  672737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:53:18.783533  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:53:18.783575  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.802276  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.904329  672737 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:53:18.908177  672737 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:53:18.908210  672737 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:53:18.908222  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:53:18.908267  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:53:18.908346  672737 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:53:18.908470  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:53:18.916278  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:18.940533  672737 start.go:296] duration metric: took 157.106831ms for postStartSetup
	I1019 12:53:18.940837  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.959008  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:18.959254  672737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:53:18.959294  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.976265  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.069698  672737 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:53:19.074565  672737 start.go:128] duration metric: took 7.257430988s to createHost
	I1019 12:53:19.074635  672737 start.go:83] releasing machines lock for "newest-cni-190708", held for 7.257591431s
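The acquireMachinesLock/releasing pair above logs a Spec with Name/Clock/Delay/Timeout/Cancel fields, which matches the shape of juju/mutex (retry every 500ms until a 10m deadline). A stand-in sketch of the same retry-until-deadline pattern using an O_EXCL lock file; acquireLock is an invented name, not the minikube helper:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file every `delay` until
	// `timeout` elapses, returning a release function on success.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held")
	}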
	I1019 12:53:19.074702  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:19.092846  672737 ssh_runner.go:195] Run: cat /version.json
	I1019 12:53:19.092896  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.092920  672737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:53:19.092980  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.112049  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.112296  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.259186  672737 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:19.265848  672737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:53:19.301474  672737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:53:19.306225  672737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:53:19.306297  672737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:53:19.331979  672737 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:53:19.332008  672737 start.go:495] detecting cgroup driver to use...
	I1019 12:53:19.332048  672737 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:53:19.332111  672737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:53:19.348084  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:53:19.360773  672737 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:53:19.360844  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:53:19.377948  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:53:19.395822  672737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:53:19.484678  672737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:53:19.575544  672737 docker.go:234] disabling docker service ...
	I1019 12:53:19.575618  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:53:19.595378  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:53:19.608092  672737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:53:19.693958  672737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:53:19.776371  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:53:19.789375  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:53:19.804627  672737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:53:19.804704  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.814787  672737 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:53:19.814837  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.823551  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.832169  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.840784  672737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:53:19.848724  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.857100  672737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.870352  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.878731  672737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:53:19.886348  672737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:53:19.893759  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:19.973321  672737 ssh_runner.go:195] Run: sudo systemctl restart crio
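Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this before the restart — a reconstruction from the commands, not a captured file, and the TOML section headers are assumed:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]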
	I1019 12:53:20.077881  672737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:53:20.077979  672737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:53:20.082037  672737 start.go:563] Will wait 60s for crictl version
	I1019 12:53:20.082093  672737 ssh_runner.go:195] Run: which crictl
	I1019 12:53:20.085569  672737 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:53:20.109837  672737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
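The two "Will wait 60s" lines above are simple readiness polls: stat the CRI socket path, then ask crictl for a version. A sketch of the socket half; the 200ms interval is an assumption for illustration, not minikube's actual cadence:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket file exists or the timeout
	// elapses, mirroring the "Will wait 60s for socket path" step.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("socket %s not ready after %s", path, timeout)
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}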
	I1019 12:53:20.109920  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.138350  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.168482  672737 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:53:20.169863  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:20.188025  672737 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:53:20.192265  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.203815  672737 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:53:20.205047  672737 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:53:20.205149  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:20.205199  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.236514  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.236536  672737 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:53:20.236581  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.262051  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.262073  672737 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:53:20.262080  672737 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:53:20.262171  672737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
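The drop-in above clears the packaged ExecStart and relaunches the kubelet with node-specific flags (hostname override, node IP, bootstrap kubeconfig). Once the files land on disk via the scp steps below, the merged unit can be inspected on the node; a sketch using standard systemd tooling:

	sudo systemctl cat kubelet
	# shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in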
	I1019 12:53:20.262247  672737 ssh_runner.go:195] Run: crio config
	I1019 12:53:20.309916  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:20.309950  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:20.309973  672737 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:53:20.310003  672737 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:53:20.310145  672737 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:53:20.310214  672737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:53:20.318657  672737 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:53:20.318731  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:53:20.326554  672737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:53:20.339030  672737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:53:20.354155  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
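At this point the four-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) sits at /var/tmp/minikube/kubeadm.yaml.new, 2211 bytes as logged. On kubeadm releases that ship the validate subcommand it can be linted before init runs; a sketch, not part of the test flow:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new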
	I1019 12:53:20.366696  672737 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:53:20.370356  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.380455  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:20.458942  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:20.485015  672737 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:53:20.485043  672737 certs.go:195] generating shared ca certs ...
	I1019 12:53:20.485070  672737 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.485221  672737 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:53:20.485264  672737 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:53:20.485275  672737 certs.go:257] generating profile certs ...
	I1019 12:53:20.485328  672737 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:53:20.485348  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt with IP's: []
	I1019 12:53:20.585551  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt ...
	I1019 12:53:20.585580  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt: {Name:mk5251db26990dc5997b9e5853758832f57cf196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585769  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key ...
	I1019 12:53:20.585781  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key: {Name:mk05802bac0f3e5b3a8b334617d45fe07eee0068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585867  672737 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:53:20.585883  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 12:53:20.684366  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd ...
	I1019 12:53:20.684395  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd: {Name:mk395ac2723daa6eac9a1a5448aa56dcc3dae795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684562  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd ...
	I1019 12:53:20.684576  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd: {Name:mk1d126d0c5513551abbae58673dc597e26ffe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684650  672737 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt
	I1019 12:53:20.684722  672737 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key
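The apiserver certificate assembled here has to cover every address a client may dial: the cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.94.2. A hedged spot-check of the SANs with openssl (1.1.1 or newer), using the path from the log:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt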
	I1019 12:53:20.684776  672737 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:53:20.684791  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt with IP's: []
	I1019 12:53:20.821306  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt ...
	I1019 12:53:20.821336  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt: {Name:mkf04fb8bbf161179ae86ba91d4a80f873fae21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821524  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key ...
	I1019 12:53:20.821544  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key: {Name:mk22ac123e8932e8db98bd277997b637ec873079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821743  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:53:20.821779  672737 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:53:20.821789  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:53:20.821812  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:53:20.821834  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:53:20.821860  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:53:20.821901  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:20.822529  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:53:20.843244  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:53:20.860464  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:53:20.877640  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:53:20.895480  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:53:20.912797  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:53:20.929757  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:53:20.947521  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:53:20.964869  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:53:20.984248  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:53:21.003061  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:53:21.020532  672737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:53:21.033435  672737 ssh_runner.go:195] Run: openssl version
	I1019 12:53:21.040056  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:53:21.049001  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052716  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052781  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.088149  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:53:21.097154  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:53:21.105495  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109154  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109216  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.144296  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:53:21.153347  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:53:21.161940  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165605  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165655  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.199345  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
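The test -L / ln -fs pairs above maintain OpenSSL's hashed certificate directory: each CA under /etc/ssl/certs needs a <subject-hash>.0 symlink, and the hash is exactly what the openssl x509 -hash calls compute (b5213941 for minikubeCA.pem, 3ec20f2e for 3552622.pem, 51391683 for 355262.pem, per the link names above). Recomputing one by hand, as a sketch:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink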
	I1019 12:53:21.208215  672737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:53:21.212056  672737 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:53:21.212119  672737 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:21.212215  672737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:53:21.212265  672737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:53:21.240234  672737 cri.go:89] found id: ""
	I1019 12:53:21.240301  672737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:53:21.248582  672737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:53:21.256728  672737 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:53:21.256801  672737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:53:21.265096  672737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:53:21.265135  672737 kubeadm.go:157] found existing configuration files:
	
	I1019 12:53:21.265192  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:53:21.273544  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:53:21.273612  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:53:21.282090  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:53:21.290396  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:53:21.290490  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:53:21.300201  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.308252  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:53:21.308306  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.315749  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:53:21.323167  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:53:21.323239  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:53:21.330315  672737 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:53:21.369107  672737 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:53:21.369180  672737 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:53:21.390319  672737 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:53:21.390379  672737 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:53:21.390409  672737 kubeadm.go:318] OS: Linux
	I1019 12:53:21.390480  672737 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:53:21.390540  672737 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:53:21.390652  672737 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:53:21.390735  672737 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:53:21.390790  672737 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:53:21.390890  672737 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:53:21.390973  672737 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:53:21.391026  672737 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:53:21.449690  672737 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:53:21.449859  672737 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:53:21.449988  672737 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:53:21.458017  672737 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:53:21.459979  672737 out.go:252]   - Generating certificates and keys ...
	I1019 12:53:21.460084  672737 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:53:21.460184  672737 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1019 12:53:17.646821  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.647689  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.795394  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:21.795584  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:23.796166  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:21.782609  672737 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:53:22.004817  672737 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:53:22.154911  672737 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:53:22.730145  672737 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:53:22.932723  672737 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:53:22.932904  672737 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.243959  672737 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:53:23.244120  672737 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.410854  672737 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:53:23.472366  672737 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:53:23.643869  672737 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:53:23.644033  672737 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:53:23.711987  672737 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:53:24.037993  672737 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:53:24.501726  672737 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:53:24.744523  672737 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:53:24.859147  672737 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:53:24.859688  672737 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:53:24.863264  672737 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:53:24.864642  672737 out.go:252]   - Booting up control plane ...
	I1019 12:53:24.864730  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:53:24.864796  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:53:24.865498  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:53:24.879079  672737 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:53:24.879207  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:53:24.886821  672737 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:53:24.887101  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:53:24.887199  672737 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:53:24.983491  672737 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:53:24.983708  672737 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:53:25.984614  672737 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001307224s
	I1019 12:53:25.988599  672737 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:53:25.988724  672737 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:53:25.988848  672737 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:53:25.988960  672737 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 12:53:22.146944  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:24.647501  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:26.295683  663517 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:53:26.295713  663517 pod_ready.go:86] duration metric: took 31.505627238s for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.297917  663517 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.301953  663517 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:53:26.301978  663517 pod_ready.go:86] duration metric: took 4.035262ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.304112  663517 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.308120  663517 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:53:26.308144  663517 pod_ready.go:86] duration metric: took 4.009533ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.309999  663517 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.494192  663517 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:53:26.494219  663517 pod_ready.go:86] duration metric: took 184.199033ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.694487  663517 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.094397  663517 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:53:27.094457  663517 pod_ready.go:86] duration metric: took 399.93585ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.293675  663517 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694119  663517 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:53:27.694146  663517 pod_ready.go:86] duration metric: took 400.447048ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694158  663517 pod_ready.go:40] duration metric: took 32.912525222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:27.746279  663517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:27.748237  663517 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	I1019 12:53:27.518915  672737 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.530228054s
	I1019 12:53:28.053793  672737 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.061152071s
	I1019 12:53:29.990081  672737 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001429284s
	I1019 12:53:30.001867  672737 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:53:30.014037  672737 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:53:30.024140  672737 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:53:30.024456  672737 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-190708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:53:30.033264  672737 kubeadm.go:318] [bootstrap-token] Using token: gtkds1.9e0h8pmw5r5mqwja
	I1019 12:53:30.034587  672737 out.go:252]   - Configuring RBAC rules ...
	I1019 12:53:30.034754  672737 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:53:30.038773  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:53:30.045039  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:53:30.049009  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:53:30.052044  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:53:30.054665  672737 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:53:30.397490  672737 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:53:30.827821  672737 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:53:31.396481  672737 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:53:31.397310  672737 kubeadm.go:318] 
	I1019 12:53:31.397402  672737 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:53:31.397413  672737 kubeadm.go:318] 
	I1019 12:53:31.397551  672737 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:53:31.397565  672737 kubeadm.go:318] 
	I1019 12:53:31.397596  672737 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:53:31.397650  672737 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:53:31.397698  672737 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:53:31.397705  672737 kubeadm.go:318] 
	I1019 12:53:31.397749  672737 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:53:31.397755  672737 kubeadm.go:318] 
	I1019 12:53:31.397794  672737 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:53:31.397800  672737 kubeadm.go:318] 
	I1019 12:53:31.397861  672737 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:53:31.397953  672737 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:53:31.398040  672737 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:53:31.398051  672737 kubeadm.go:318] 
	I1019 12:53:31.398140  672737 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:53:31.398207  672737 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:53:31.398213  672737 kubeadm.go:318] 
	I1019 12:53:31.398292  672737 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398378  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:53:31.398399  672737 kubeadm.go:318] 	--control-plane 
	I1019 12:53:31.398405  672737 kubeadm.go:318] 
	I1019 12:53:31.398523  672737 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:53:31.398534  672737 kubeadm.go:318] 
	I1019 12:53:31.398627  672737 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398790  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
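The bootstrap token in the join commands above expires after the 24h ttl set in the InitConfiguration, so the printed commands go stale. A fresh worker join line can be minted on the control plane at any time; a sketch using this run's binaries path:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command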
	I1019 12:53:31.401824  672737 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:53:31.402002  672737 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:53:31.402023  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:31.402032  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:31.403960  672737 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:53:31.405314  672737 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:53:31.410474  672737 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:53:31.410496  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:53:31.424273  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
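The manifest applied above deploys kindnet as a DaemonSet in kube-system, so pod networking readiness can be followed with a rollout watch; a sketch, assuming the DaemonSet keeps its usual kindnet name:

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m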
	W1019 12:53:27.147074  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:29.645647  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:31.646857  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:31.641912  672737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:53:31.642008  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:31.642011  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-190708 minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-190708 minikube.k8s.io/primary=true
	I1019 12:53:31.652529  672737 ops.go:34] apiserver oom_adj: -16
	I1019 12:53:31.718996  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.219629  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.719834  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.219813  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.719692  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.219076  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.719433  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.219917  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.719034  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.785029  672737 kubeadm.go:1113] duration metric: took 4.143080971s to wait for elevateKubeSystemPrivileges
	I1019 12:53:35.785068  672737 kubeadm.go:402] duration metric: took 14.57295181s to StartCluster
	I1019 12:53:35.785101  672737 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.785174  672737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:35.787497  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.787794  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:53:35.787820  672737 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:35.787897  672737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:53:35.787993  672737 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:53:35.788017  672737 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	I1019 12:53:35.788020  672737 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	I1019 12:53:35.788053  672737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:53:35.788062  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:35.788057  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.788500  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.788555  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.789512  672737 out.go:179] * Verifying Kubernetes components...
	I1019 12:53:35.791378  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:35.812380  672737 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:53:33.646988  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:34.648076  664256 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:53:34.648104  664256 pod_ready.go:86] duration metric: took 36.507165259s for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.650741  664256 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.654523  664256 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.654547  664256 pod_ready.go:86] duration metric: took 3.785206ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.656429  664256 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.660685  664256 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.660712  664256 pod_ready.go:86] duration metric: took 4.258461ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.662348  664256 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.844857  664256 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.844886  664256 pod_ready.go:86] duration metric: took 182.521582ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.044783  664256 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.445005  664256 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:53:35.445031  664256 pod_ready.go:86] duration metric: took 400.222332ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.645060  664256 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045246  664256 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:36.045282  664256 pod_ready.go:86] duration metric: took 400.190569ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045298  664256 pod_ready.go:40] duration metric: took 37.908676389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:36.105764  664256 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.108299  664256 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
	I1019 12:53:35.813186  672737 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	I1019 12:53:35.813237  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.813735  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.815209  672737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.815225  672737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:53:35.815282  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.843451  672737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:35.843479  672737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:53:35.843567  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.844218  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.868726  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.877614  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:53:35.929249  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:35.955142  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.988275  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:36.052147  672737 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
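The sed pipeline above splices a hosts{} block into CoreDNS's Corefile so in-cluster lookups of host.minikube.internal resolve to the 192.168.94.1 gateway, then replaces the ConfigMap in place. Inspecting the result, as a sketch:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'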
	I1019 12:53:36.053790  672737 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:53:36.053847  672737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:53:36.305744  672737 api_server.go:72] duration metric: took 517.881771ms to wait for apiserver process to appear ...
	I1019 12:53:36.305769  672737 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:53:36.305790  672737 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:53:36.310834  672737 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:53:36.311767  672737 api_server.go:141] control plane version: v1.34.1
	I1019 12:53:36.311798  672737 api_server.go:131] duration metric: took 6.020737ms to wait for apiserver health ...
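The healthz probe above succeeds without credentials because upstream RBAC (the system:public-info-viewer binding) exposes /healthz, /livez and /readyz to unauthenticated callers. Reproducing it from the host, as a sketch; -k skips verification against the minikube CA:

	curl -sk https://192.168.94.2:8443/healthz
	# returns "ok" on a healthy apiserver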
	I1019 12:53:36.311809  672737 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:53:36.313872  672737 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:53:36.314880  672737 system_pods.go:59] 8 kube-system pods found
	I1019 12:53:36.314917  672737 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314933  672737 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:53:36.314945  672737 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:53:36.314955  672737 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:53:36.314961  672737 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:53:36.314969  672737 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:53:36.314976  672737 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:53:36.314981  672737 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314992  672737 system_pods.go:74] duration metric: took 3.173905ms to wait for pod list to return data ...
	I1019 12:53:36.315000  672737 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:53:36.315055  672737 addons.go:514] duration metric: took 527.155312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:53:36.317196  672737 default_sa.go:45] found service account: "default"
	I1019 12:53:36.317218  672737 default_sa.go:55] duration metric: took 2.212206ms for default service account to be created ...
	I1019 12:53:36.317230  672737 kubeadm.go:586] duration metric: took 529.375092ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:36.317251  672737 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:53:36.319523  672737 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:53:36.319545  672737 node_conditions.go:123] node cpu capacity is 8
	I1019 12:53:36.319557  672737 node_conditions.go:105] duration metric: took 2.300039ms to run NodePressure ...
	I1019 12:53:36.319567  672737 start.go:241] waiting for startup goroutines ...
	I1019 12:53:36.557265  672737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-190708" context rescaled to 1 replicas
	I1019 12:53:36.557311  672737 start.go:246] waiting for cluster config update ...
	I1019 12:53:36.557328  672737 start.go:255] writing updated cluster config ...
	I1019 12:53:36.557703  672737 ssh_runner.go:195] Run: rm -f paused
	I1019 12:53:36.609706  672737 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.612691  672737 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.437661035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.440999553Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1201df93-ba30-4350-8609-4991c15843d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.441432644Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e4ed53e7-4e1f-4070-9cb3-e81a08a5d2be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.442715556Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.443642252Z" level=info msg="Ran pod sandbox 4d1a5638c7706b2ff582449b77c08a2fce3f642c9106642d7efa7a258a7a66e3 with infra container: kube-system/kube-proxy-v7xgj/POD" id=1201df93-ba30-4350-8609-4991c15843d7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.444171745Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.445057145Z" level=info msg="Ran pod sandbox 07482fcea2c3c161d7421a4458eec2d7254710682983b02a4c36758254f4c7dd with infra container: kube-system/kindnet-8bb9r/POD" id=e4ed53e7-4e1f-4070-9cb3-e81a08a5d2be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.445142998Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b2c101fa-b02a-46e2-a677-2207f56d6f24 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.446370658Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=cebc47b2-db02-454b-aeaa-b8772cae7c8c name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.446464309Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=95834a93-c810-4010-9da2-e534cc68456d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.448084555Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e0208e1f-ee61-4afc-803a-ad8c27d5db1f name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.451259343Z" level=info msg="Creating container: kube-system/kube-proxy-v7xgj/kube-proxy" id=4c8b1837-44f0-46de-b9c8-a1040af4d29a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.4515714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.452683423Z" level=info msg="Creating container: kube-system/kindnet-8bb9r/kindnet-cni" id=83e68c84-727b-4f63-831f-b455e40e474f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.454326334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.457028014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.457598121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.458673818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.459185701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.486984141Z" level=info msg="Created container 29c1babe5647d866703432f0884a92ba1f637d7b2decdc517ad23a78dfb7a0fb: kube-system/kindnet-8bb9r/kindnet-cni" id=83e68c84-727b-4f63-831f-b455e40e474f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.488654475Z" level=info msg="Starting container: 29c1babe5647d866703432f0884a92ba1f637d7b2decdc517ad23a78dfb7a0fb" id=e2e9e388-de0d-4abb-8158-45eaabf2b1ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.490543799Z" level=info msg="Started container" PID=1586 containerID=29c1babe5647d866703432f0884a92ba1f637d7b2decdc517ad23a78dfb7a0fb description=kube-system/kindnet-8bb9r/kindnet-cni id=e2e9e388-de0d-4abb-8158-45eaabf2b1ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=07482fcea2c3c161d7421a4458eec2d7254710682983b02a4c36758254f4c7dd
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.490719589Z" level=info msg="Created container 658a6c1dd812faad99094c502f060e9968a49825b4aee48a749259acf5b0346b: kube-system/kube-proxy-v7xgj/kube-proxy" id=4c8b1837-44f0-46de-b9c8-a1040af4d29a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.491368734Z" level=info msg="Starting container: 658a6c1dd812faad99094c502f060e9968a49825b4aee48a749259acf5b0346b" id=a3f6f2da-9442-4d9a-80ce-446a78cdf3f6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:36 newest-cni-190708 crio[774]: time="2025-10-19T12:53:36.494831065Z" level=info msg="Started container" PID=1585 containerID=658a6c1dd812faad99094c502f060e9968a49825b4aee48a749259acf5b0346b description=kube-system/kube-proxy-v7xgj/kube-proxy id=a3f6f2da-9442-4d9a-80ce-446a78cdf3f6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4d1a5638c7706b2ff582449b77c08a2fce3f642c9106642d7efa7a258a7a66e3
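The two "Skipping invalid sysctl" warnings above are expected rather than errors: net.ipv4.ip_unprivileged_port_start is a namespaced sysctl, and CRI-O refuses to apply namespaced sysctls to sandboxes that share the host network namespace, so the kube-proxy and kindnet pods simply start without it. A sketch of how to confirm the sandbox is host-networked, assuming the profile name from this run (the sandbox ID is copied from the log above):

    # crictl runs on the minikube node, hence minikube ssh
    minikube ssh -p newest-cni-190708 -- sudo crictl inspectp 4d1a5638c7706b2ff582449b77c08a2fce3f642c9106642d7efa7a258a7a66e3 | grep -i network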
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	29c1babe5647d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   07482fcea2c3c       kindnet-8bb9r                               kube-system
	658a6c1dd812f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   4d1a5638c7706       kube-proxy-v7xgj                            kube-system
	91ce328dd3122       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   9984cf72995bf       kube-controller-manager-newest-cni-190708   kube-system
	0d5a9bf4c7e10       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   8bc33e975ebfe       kube-scheduler-newest-cni-190708            kube-system
	57b949e43dca3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   8ab0c066eb23f       etcd-newest-cni-190708                      kube-system
	c0c2f8cf1747a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   de7cec779534c       kube-apiserver-newest-cni-190708            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-190708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-190708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-190708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:53:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-190708
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:53:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:53:30 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:53:30 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:53:30 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 12:53:30 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-190708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4573dffe-685a-448f-8daf-99deda56b058
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-190708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-8bb9r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      1s
	  kube-system                 kube-apiserver-newest-cni-190708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-190708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-v7xgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  kube-system                 kube-scheduler-newest-cni-190708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-190708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-190708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-190708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-190708 event: Registered Node newest-cni-190708 in Controller
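The Ready=False condition spells out the cause directly: there is no CNI configuration file in /etc/cni/net.d/ yet, which is also why the node carries the not-ready taint seen earlier. Two illustrative checks, assuming the same profile name:

    # empty until kindnet writes its config
    minikube ssh -p newest-cni-190708 -- ls /etc/cni/net.d/
    # watch the node flip to Ready once the config lands
    kubectl --context newest-cni-190708 get nodes -w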
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [57b949e43dca30496069b46cdf6779b3f89cac145d2cb570165f1c041539f6cf] <==
	{"level":"warn","ts":"2025-10-19T12:53:27.328010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.345628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.357351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.362510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.370600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.378712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.385635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.394094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.400878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.409286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.417493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.424715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.432284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.439188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.446795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.453399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.460978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.467490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.474354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.480545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.487592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.500147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.508045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.516521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:53:27.566880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41624","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:53:37 up  2:36,  0 user,  load average: 3.44, 4.52, 3.07
	Linux newest-cni-190708 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29c1babe5647d866703432f0884a92ba1f637d7b2decdc517ad23a78dfb7a0fb] <==
	I1019 12:53:36.662762       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:53:36.663207       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:53:36.663392       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:53:36.663412       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:53:36.663450       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:53:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:53:36.956201       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:53:36.956241       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:53:36.956382       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:53:36.956809       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:53:37.356793       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:53:37.356830       1 metrics.go:72] Registering metrics
	I1019 12:53:37.356926       1 controller.go:711] "Syncing nftables rules"
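kindnet's "nri plugin exited: failed to connect to NRI service" line is non-fatal: the kube-network-policies plugin tries to register over NRI, this image exposes no /var/run/nri/nri.sock, and the controller appears to proceed with its informer-based sync regardless (see the "Caches are synced" and "Syncing nftables rules" lines that follow). A trivial check, assuming the same profile:

    # expected to fail on this image: no NRI socket directory
    minikube ssh -p newest-cni-190708 -- ls /var/run/nri/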
	
	
	==> kube-apiserver [c0c2f8cf1747a17f6cd5eea08e918d036cd08a49e662cdf67969cefe4a03264a] <==
	I1019 12:53:28.083278       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:53:28.083674       1 policy_source.go:240] refreshing policies
	I1019 12:53:28.085333       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:53:28.087727       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1019 12:53:28.087885       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:53:28.097377       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:53:28.099191       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:53:28.112125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:53:28.985786       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1019 12:53:28.989552       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1019 12:53:28.989587       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:53:29.439372       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:53:29.475700       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:53:29.592406       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1019 12:53:29.598343       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1019 12:53:29.599300       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:53:29.603520       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:53:30.011096       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:53:30.808404       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:53:30.826927       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1019 12:53:30.834038       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 12:53:36.015119       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:53:36.067180       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:53:36.072747       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 12:53:36.113119       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [91ce328dd31220abbdf28848e95d501e3a14ac4dee59a70ee0181c5322590923] <==
	I1019 12:53:34.981100       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:53:34.988287       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:53:34.988359       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:53:34.988515       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-190708"
	I1019 12:53:34.988569       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1019 12:53:35.009341       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 12:53:35.009386       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:53:35.010556       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:53:35.010586       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 12:53:35.010636       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:53:35.010649       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:53:35.010638       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:53:35.010753       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 12:53:35.010895       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:53:35.010895       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1019 12:53:35.010903       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:53:35.010954       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:53:35.011076       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 12:53:35.011145       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:53:35.012270       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1019 12:53:35.012435       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 12:53:35.014582       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:53:35.015777       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:53:35.016998       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:53:35.036454       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [658a6c1dd812faad99094c502f060e9968a49825b4aee48a749259acf5b0346b] <==
	I1019 12:53:36.534315       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:53:36.595203       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:53:36.695935       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:53:36.695992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:53:36.696092       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:53:36.717497       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:53:36.717548       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:53:36.722972       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:53:36.723522       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:53:36.723565       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:53:36.726728       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:53:36.726728       1 config.go:200] "Starting service config controller"
	I1019 12:53:36.726758       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:53:36.726762       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:53:36.726784       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:53:36.726790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:53:36.726871       1 config.go:309] "Starting node config controller"
	I1019 12:53:36.726877       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:53:36.726883       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:53:36.827834       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:53:36.827862       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:53:36.827834       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
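The single E-level line above is a configuration warning, not a fault in this run: with nodePortAddresses unset, NodePort services accept traffic on every local IP, and kube-proxy suggests restricting them to the primary node IP. A hedged sketch of the change in a kubeadm-style cluster (the kube-proxy ConfigMap and its config.conf key are kubeadm conventions; the "primary" keyword is accepted by recent releases):

    kubectl --context newest-cni-190708 -n kube-system edit configmap kube-proxy
    # in the config.conf document set:
    #   nodePortAddresses: ["primary"]
    kubectl --context newest-cni-190708 -n kube-system rollout restart daemonset kube-proxy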
	
	
	==> kube-scheduler [0d5a9bf4c7e105c0210670e026e66dcd230fd1160f278b3760318b6579abc139] <==
	E1019 12:53:28.046152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:53:28.046316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 12:53:28.046413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1019 12:53:28.046538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 12:53:28.046600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1019 12:53:28.046919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 12:53:28.047689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:53:28.047718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 12:53:28.048485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 12:53:28.048542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:53:28.048558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 12:53:28.048639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 12:53:28.048705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:53:28.048710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:53:28.048805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:53:28.855557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 12:53:28.969237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 12:53:28.987412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 12:53:29.006536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 12:53:29.118264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 12:53:29.128829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 12:53:29.132980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 12:53:29.213714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 12:53:29.243006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1019 12:53:31.041602       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
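The "Failed to watch ... is forbidden" burst at 12:53:28-29 is the usual bootstrap race: the scheduler's informers start before the apiserver has finished creating the system:kube-scheduler RBAC bindings, the reflectors retry, and the closing "Caches are synced" line shows recovery. An illustrative way to verify the grants after startup:

    # both should print "yes" once RBAC bootstrapping is complete
    kubectl --context newest-cni-190708 auth can-i list poddisruptionbudgets --as=system:kube-scheduler --all-namespaces
    kubectl --context newest-cni-190708 auth can-i watch nodes --as=system:kube-scheduler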
	
	
	==> kubelet <==
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.625465    1304 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.654955    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.655085    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.655138    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.655226    1304 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: E1019 12:53:31.663450    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-190708\" already exists" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: E1019 12:53:31.664669    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190708\" already exists" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: E1019 12:53:31.665268    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-190708\" already exists" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: E1019 12:53:31.665629    1304 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-190708\" already exists" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.693117    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-190708" podStartSLOduration=1.693092448 podStartE2EDuration="1.693092448s" podCreationTimestamp="2025-10-19 12:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:31.680063282 +0000 UTC m=+1.127500507" watchObservedRunningTime="2025-10-19 12:53:31.693092448 +0000 UTC m=+1.140529683"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.702877    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-190708" podStartSLOduration=1.7028543969999999 podStartE2EDuration="1.702854397s" podCreationTimestamp="2025-10-19 12:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:31.693074988 +0000 UTC m=+1.140512213" watchObservedRunningTime="2025-10-19 12:53:31.702854397 +0000 UTC m=+1.150291615"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.703054    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-190708" podStartSLOduration=1.703044899 podStartE2EDuration="1.703044899s" podCreationTimestamp="2025-10-19 12:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:31.702838827 +0000 UTC m=+1.150276071" watchObservedRunningTime="2025-10-19 12:53:31.703044899 +0000 UTC m=+1.150482137"
	Oct 19 12:53:31 newest-cni-190708 kubelet[1304]: I1019 12:53:31.721320    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-190708" podStartSLOduration=1.721295558 podStartE2EDuration="1.721295558s" podCreationTimestamp="2025-10-19 12:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:31.712244178 +0000 UTC m=+1.159681420" watchObservedRunningTime="2025-10-19 12:53:31.721295558 +0000 UTC m=+1.168732775"
	Oct 19 12:53:35 newest-cni-190708 kubelet[1304]: I1019 12:53:35.054131    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 12:53:35 newest-cni-190708 kubelet[1304]: I1019 12:53:35.054844    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161251    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6r5q\" (UniqueName: \"kubernetes.io/projected/9620c4c3-352a-4d93-8d43-f7a06fcd3374-kube-api-access-t6r5q\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161304    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-lib-modules\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161333    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9620c4c3-352a-4d93-8d43-f7a06fcd3374-kube-proxy\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161354    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-xtables-lock\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161383    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-lib-modules\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161406    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-cni-cfg\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161445    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vp9\" (UniqueName: \"kubernetes.io/projected/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-kube-api-access-k6vp9\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.161471    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-xtables-lock\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.679352    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v7xgj" podStartSLOduration=0.679327792 podStartE2EDuration="679.327792ms" podCreationTimestamp="2025-10-19 12:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:36.678980161 +0000 UTC m=+6.126417386" watchObservedRunningTime="2025-10-19 12:53:36.679327792 +0000 UTC m=+6.126765019"
	Oct 19 12:53:36 newest-cni-190708 kubelet[1304]: I1019 12:53:36.689534    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8bb9r" podStartSLOduration=0.68951233 podStartE2EDuration="689.51233ms" podCreationTimestamp="2025-10-19 12:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-19 12:53:36.689472818 +0000 UTC m=+6.136910043" watchObservedRunningTime="2025-10-19 12:53:36.68951233 +0000 UTC m=+6.136949556"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-190708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-kp55x storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner: exit status 1 (58.223189ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-kp55x" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner: exit status 1
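The NotFound errors above look like a namespace mismatch rather than genuinely missing pods: the earlier get po call listed across all namespaces (-A) and found both pods in kube-system, while the describe call passes no namespace flag and therefore queries default. Re-running it with the namespace would likely locate them (assuming they were not deleted in the interim):

    kubectl --context newest-cni-190708 -n kube-system describe pod coredns-66bc5c9577-kp55x storage-provisioner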
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-123864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-123864 --alsologtostderr -v=1: exit status 80 (2.248318873s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-123864 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:53:39.503995  676745 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:39.504243  676745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:39.504251  676745 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:39.504254  676745 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:39.504494  676745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:39.504733  676745 out.go:368] Setting JSON to false
	I1019 12:53:39.504779  676745 mustload.go:65] Loading cluster: embed-certs-123864
	I1019 12:53:39.505105  676745 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:39.505525  676745 cli_runner.go:164] Run: docker container inspect embed-certs-123864 --format={{.State.Status}}
	I1019 12:53:39.525309  676745 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:53:39.525652  676745 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:39.584159  676745 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 12:53:39.572834917 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:39.585067  676745 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-123864 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
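Note: the %!s(bool=false) and %!s(int=8443) tokens in the flag dump above are Go fmt diagnostics for non-string values passed to a %s verb, and the trailing "(MISSING)" marks a format argument that was never supplied. A minimal demonstration of the first case, not minikube code:

    package main

    import "fmt"

    func main() {
        // A bool formatted with %s gets a diagnostic wrapper rather than a panic.
        fmt.Printf("%s\n", false) // prints: %!s(bool=false)
    }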
	I1019 12:53:39.587001  676745 out.go:179] * Pausing node embed-certs-123864 ... 
	I1019 12:53:39.588238  676745 host.go:66] Checking if "embed-certs-123864" exists ...
	I1019 12:53:39.588564  676745 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:39.588605  676745 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-123864
	I1019 12:53:39.607177  676745 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/embed-certs-123864/id_rsa Username:docker}
	I1019 12:53:39.702165  676745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:39.714181  676745 pause.go:52] kubelet running: true
	I1019 12:53:39.714266  676745 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:39.873860  676745 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:39.873973  676745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:39.947662  676745 cri.go:89] found id: "120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394"
	I1019 12:53:39.947687  676745 cri.go:89] found id: "5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903"
	I1019 12:53:39.947691  676745 cri.go:89] found id: "0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0"
	I1019 12:53:39.947694  676745 cri.go:89] found id: "b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e"
	I1019 12:53:39.947697  676745 cri.go:89] found id: "6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	I1019 12:53:39.947701  676745 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:53:39.947703  676745 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:53:39.947706  676745 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:53:39.947708  676745 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:53:39.947713  676745 cri.go:89] found id: "a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	I1019 12:53:39.947716  676745 cri.go:89] found id: "60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696"
	I1019 12:53:39.947718  676745 cri.go:89] found id: ""
	I1019 12:53:39.947755  676745 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:39.960147  676745 retry.go:31] will retry after 187.736528ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:39Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:40.148519  676745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:40.161853  676745 pause.go:52] kubelet running: false
	I1019 12:53:40.161909  676745 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:40.305382  676745 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:40.305481  676745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:40.371572  676745 cri.go:89] found id: "120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394"
	I1019 12:53:40.371607  676745 cri.go:89] found id: "5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903"
	I1019 12:53:40.371613  676745 cri.go:89] found id: "0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0"
	I1019 12:53:40.371617  676745 cri.go:89] found id: "b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e"
	I1019 12:53:40.371621  676745 cri.go:89] found id: "6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	I1019 12:53:40.371626  676745 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:53:40.371630  676745 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:53:40.371634  676745 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:53:40.371638  676745 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:53:40.371646  676745 cri.go:89] found id: "a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	I1019 12:53:40.371650  676745 cri.go:89] found id: "60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696"
	I1019 12:53:40.371654  676745 cri.go:89] found id: ""
	I1019 12:53:40.371721  676745 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:40.383355  676745 retry.go:31] will retry after 299.307436ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:40Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:40.683615  676745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:40.696517  676745 pause.go:52] kubelet running: false
	I1019 12:53:40.696600  676745 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:40.828291  676745 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:40.828387  676745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:40.894208  676745 cri.go:89] found id: "120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394"
	I1019 12:53:40.894240  676745 cri.go:89] found id: "5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903"
	I1019 12:53:40.894246  676745 cri.go:89] found id: "0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0"
	I1019 12:53:40.894250  676745 cri.go:89] found id: "b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e"
	I1019 12:53:40.894254  676745 cri.go:89] found id: "6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	I1019 12:53:40.894258  676745 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:53:40.894262  676745 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:53:40.894266  676745 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:53:40.894269  676745 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:53:40.894279  676745 cri.go:89] found id: "a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	I1019 12:53:40.894283  676745 cri.go:89] found id: "60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696"
	I1019 12:53:40.894288  676745 cri.go:89] found id: ""
	I1019 12:53:40.894334  676745 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:40.906141  676745 retry.go:31] will retry after 554.646599ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:40Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:41.461639  676745 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:41.474520  676745 pause.go:52] kubelet running: false
	I1019 12:53:41.474591  676745 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:41.613776  676745 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:41.613841  676745 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:41.681262  676745 cri.go:89] found id: "120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394"
	I1019 12:53:41.681285  676745 cri.go:89] found id: "5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903"
	I1019 12:53:41.681289  676745 cri.go:89] found id: "0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0"
	I1019 12:53:41.681293  676745 cri.go:89] found id: "b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e"
	I1019 12:53:41.681295  676745 cri.go:89] found id: "6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	I1019 12:53:41.681299  676745 cri.go:89] found id: "0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4"
	I1019 12:53:41.681301  676745 cri.go:89] found id: "2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea"
	I1019 12:53:41.681304  676745 cri.go:89] found id: "f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc"
	I1019 12:53:41.681307  676745 cri.go:89] found id: "ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25"
	I1019 12:53:41.681313  676745 cri.go:89] found id: "a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	I1019 12:53:41.681316  676745 cri.go:89] found id: "60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696"
	I1019 12:53:41.681318  676745 cri.go:89] found id: ""
	I1019 12:53:41.681355  676745 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:41.696315  676745 out.go:203] 
	W1019 12:53:41.697597  676745 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:53:41.697612  676745 out.go:285] * 
	* 
	W1019 12:53:41.702181  676745 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:53:41.703656  676745 out.go:203] 

                                                
                                                
** /stderr **
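The retry.go lines above show the pause path backing off between attempts (~188ms, ~299ms, ~555ms, roughly doubling each time). A minimal sketch of that retry shape, assuming a plain doubling backoff and ignoring the jitter the real delays suggest:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff runs fn up to attempts times, doubling the delay after
    // each failure, and returns the last error once the attempts are exhausted.
    func retryWithBackoff(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        err := retryWithBackoff(4, 190*time.Millisecond, func() error {
            return errors.New("list running: runc: exit status 1") // stand-in for the failure above
        })
        fmt.Println("giving up:", err)
    }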
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-123864 --alsologtostderr -v=1 failed: exit status 80
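Every attempt dies at the same step: `sudo runc list -f json` cannot open /run/runc, runc's default state directory. With CRI-O driving the containers, the runtime state may live under another root (perhaps /run/crio; that path is a guess, not something this log confirms). A quick hedged probe for which state directories exist on the node:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // /run/runc is runc's default --root; /run/crio is an assumed CRI-O location.
        for _, root := range []string{"/run/runc", "/run/crio"} {
            if _, err := os.Stat(root); err != nil {
                fmt.Printf("%s: %v\n", root, err) // matches the "no such file or directory" above
            } else {
                fmt.Printf("%s: exists\n", root)
            }
        }
    }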
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-123864
helpers_test.go:243: (dbg) docker inspect embed-certs-123864:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	        "Created": "2025-10-19T12:51:12.601870775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 663721,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:44.306581522Z",
	            "FinishedAt": "2025-10-19T12:52:43.47687446Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509-json.log",
	        "Name": "/embed-certs-123864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-123864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-123864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	                "LowerDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-123864",
	                "Source": "/var/lib/docker/volumes/embed-certs-123864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-123864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-123864",
	                "name.minikube.sigs.k8s.io": "embed-certs-123864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "660fe739191fecd6c47c82610de0ce6eac5d5ed9d24e3f1c9f8c36072b6b1198",
	            "SandboxKey": "/var/run/docker/netns/660fe739191f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-123864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4f:ea:d8:58:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd0a3e89589b9fe587e991244f1cb1f39b034b86cfecd1e038afdfb125c5bb4",
	                    "EndpointID": "20d2d8872ec6038fe37933db85098208fa811c52be7122f11de7f90e4e687439",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-123864",
	                        "53e8a5bc9e53"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
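The NetworkSettings.Ports block above is what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` call reads to locate the SSH port (33490 for this container). The same lookup in Go, as a sketch that models only the fields used here:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // inspect models just the port-binding slice of `docker inspect` output.
    type inspect struct {
        NetworkSettings struct {
            Ports map[string][]struct{ HostIp, HostPort string }
        }
    }

    func main() {
        raw := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33490"}]}}}]`)
        var out []inspect
        if err := json.Unmarshal(raw, &out); err != nil {
            panic(err)
        }
        fmt.Println(out[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // 33490
    }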
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864: exit status 2 (310.312884ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-123864 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-123864 logs -n 25: (1.081967016s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
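Reading the first entry below against that format line: severity I, date 1019 (Oct 19), time 12:53:11.615027, thread id 672737, source out.go:360. A small parser for the header, as a sketch (the regular expression is ours, not minikube's):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
        re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)`)
        m := re.FindStringSubmatch(`I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...`)
        fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }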
	I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:11.615299  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615311  672737 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:11.615315  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615551  672737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:11.616038  672737 out.go:368] Setting JSON to false
	I1019 12:53:11.617746  672737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9340,"bootTime":1760869052,"procs":566,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:11.617846  672737 start.go:141] virtualization: kvm guest
	I1019 12:53:11.619915  672737 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:11.621699  672737 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:11.621736  672737 notify.go:220] Checking for updates...
	I1019 12:53:11.624129  672737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:11.626246  672737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:11.627453  672737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:11.628681  672737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:11.629995  672737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:11.631642  672737 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631786  672737 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631990  672737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:11.658136  672737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:11.658233  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.722933  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.711540262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.723046  672737 docker.go:318] overlay module found
	I1019 12:53:11.724874  672737 out.go:179] * Using the docker driver based on user configuration
	I1019 12:53:11.726372  672737 start.go:305] selected driver: docker
	I1019 12:53:11.726394  672737 start.go:925] validating driver "docker" against <nil>
	I1019 12:53:11.726412  672737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:11.727020  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.787909  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.778156597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.788107  672737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 12:53:11.788149  672737 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 12:53:11.788529  672737 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:11.790331  672737 out.go:179] * Using Docker driver with root privileges
	I1019 12:53:11.791430  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:11.791511  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:11.791528  672737 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:53:11.791587  672737 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:11.792873  672737 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:11.794127  672737 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:11.795216  672737 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:11.796409  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:11.796465  672737 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:11.796477  672737 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:11.796486  672737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:11.796551  672737 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:11.796562  672737 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:11.796649  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:11.796666  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json: {Name:mk458b42b0f9f21f6e5af311f76e8caf9c4c5efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:11.816881  672737 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:11.816898  672737 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:11.816920  672737 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:11.816943  672737 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:11.817032  672737 start.go:364] duration metric: took 74.015µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:11.817054  672737 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:11.817117  672737 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:53:09.146473  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:11.146837  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:10.296323  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:12.795707  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:11.818963  672737 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:53:11.819197  672737 start.go:159] libmachine.API.Create for "newest-cni-190708" (driver="docker")
	I1019 12:53:11.819227  672737 client.go:168] LocalClient.Create starting
	I1019 12:53:11.819287  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:53:11.819320  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819338  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819384  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:53:11.819402  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819412  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819803  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:53:11.837346  672737 cli_runner.go:211] docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:53:11.837404  672737 network_create.go:284] running [docker network inspect newest-cni-190708] to gather additional debugging logs...
	I1019 12:53:11.837466  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708
	W1019 12:53:11.853768  672737 cli_runner.go:211] docker network inspect newest-cni-190708 returned with exit code 1
	I1019 12:53:11.853794  672737 network_create.go:287] error running [docker network inspect newest-cni-190708]: docker network inspect newest-cni-190708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-190708 not found
	I1019 12:53:11.853806  672737 network_create.go:289] output of [docker network inspect newest-cni-190708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-190708 not found
	
	** /stderr **
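The exit status 1 with "network newest-cni-190708 not found" above is the signal network_create.go uses to conclude the network does not exist yet before creating it. A hedged sketch of that existence probe:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // networkExists treats a zero exit from `docker network inspect` as present
    // and any error (exit 1 for a missing network, as above) as absent.
    func networkExists(name string) bool {
        return exec.Command("docker", "network", "inspect", name).Run() == nil
    }

    func main() {
        fmt.Println(networkExists("newest-cni-190708"))
    }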
	I1019 12:53:11.853902  672737 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:11.872131  672737 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:53:11.872777  672737 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:53:11.873176  672737 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:53:11.873710  672737 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:53:11.874346  672737 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de90530a2892 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:1b:d3:5b:94:95} reservation:<nil>}
	I1019 12:53:11.875186  672737 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7d700}
	I1019 12:53:11.875210  672737 network_create.go:124] attempt to create docker network newest-cni-190708 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:53:11.875256  672737 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-190708 newest-cni-190708
	I1019 12:53:11.933015  672737 network_create.go:108] docker network newest-cni-190708 192.168.94.0/24 created
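
Each "skipping subnet" line above corresponds to a bridge network Docker already owns; in this run minikube walks candidate /24s in steps of 9 in the third octet (49, 58, 67, ...) and takes the first free one, 192.168.94.0/24. A minimal Go sketch of that scan, with the taken set hard-coded from this run's log rather than discovered from `docker network inspect`:

    package main

    import (
    	"fmt"
    	"net"
    )

    // takenSubnets stands in for what minikube derives from
    // `docker network inspect` and host routes; the five entries are
    // copied from the "skipping subnet" lines in this run's log.
    var takenSubnets = map[string]bool{
    	"192.168.49.0/24": true,
    	"192.168.58.0/24": true,
    	"192.168.67.0/24": true,
    	"192.168.76.0/24": true,
    	"192.168.85.0/24": true,
    }

    // freeSubnet walks 192.168.<n>.0/24 candidates with n = 49, 58, 67, ...
    // and returns the first subnet not already claimed.
    func freeSubnet() (*net.IPNet, error) {
    	for third := 49; third <= 247; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if takenSubnets[cidr] {
    			continue
    		}
    		_, subnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			return nil, err
    		}
    		return subnet, nil
    	}
    	return nil, fmt.Errorf("no free /24 in the scanned range")
    }

    func main() {
    	subnet, err := freeSubnet()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("using free private subnet", subnet) // 192.168.94.0/24
    }
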
	I1019 12:53:11.933049  672737 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-190708" container
	I1019 12:53:11.933120  672737 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:53:11.950774  672737 cli_runner.go:164] Run: docker volume create newest-cni-190708 --label name.minikube.sigs.k8s.io=newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:53:11.967572  672737 oci.go:103] Successfully created a docker volume newest-cni-190708
	I1019 12:53:11.967650  672737 cli_runner.go:164] Run: docker run --rm --name newest-cni-190708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --entrypoint /usr/bin/test -v newest-cni-190708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:53:12.367353  672737 oci.go:107] Successfully prepared a docker volume newest-cni-190708
	I1019 12:53:12.367407  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:12.367450  672737 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:53:12.367533  672737 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 12:53:13.646716  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.646757  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.295646  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:17.297846  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:16.825912  672737 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458335671s)
	I1019 12:53:16.825946  672737 kic.go:203] duration metric: took 4.45849341s to extract preloaded images to volume ...
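
The 4.5s step above unpacks the preloaded image tarball straight into the node's named volume by running tar inside a throwaway kicbase container. A sketch of the same invocation driven from Go's os/exec; the tarball path, volume name, and image tag are copied from the log and differ per run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Paths and image tag copied from the log above; the volume name
    	// "newest-cni-190708" is whatever `docker volume create` made.
    	tarball := "/home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"

    	// Run tar inside a throwaway container so the lz4 tarball is
    	// unpacked directly into the named volume that will later back
    	// the node's /var.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "newest-cni-190708:/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("preloaded images extracted")
    }
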
	W1019 12:53:16.826042  672737 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:53:16.826073  672737 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:53:16.826110  672737 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:53:16.883735  672737 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-190708 --name newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-190708 --network newest-cni-190708 --ip 192.168.94.2 --volume newest-cni-190708:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:53:17.149721  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Running}}
	I1019 12:53:17.168092  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.187070  672737 cli_runner.go:164] Run: docker exec newest-cni-190708 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:53:17.235594  672737 oci.go:144] the created container "newest-cni-190708" has a running status.
	I1019 12:53:17.235624  672737 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa...
	I1019 12:53:17.641114  672737 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:53:17.666983  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.686164  672737 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:53:17.686197  672737 kic_runner.go:114] Args: [docker exec --privileged newest-cni-190708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:53:17.730607  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.748800  672737 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:17.748886  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.768809  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.769043  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.769056  672737 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:17.904434  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:17.904466  672737 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:53:17.904532  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.923140  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.923351  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.923364  672737 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:53:18.066330  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:18.066401  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.084720  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.084937  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.084955  672737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:53:18.218215  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
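
provisionDockerMachine talks to the node over SSH on the host port Docker mapped for 22/tcp (33500 in this run). A sketch of that connection using golang.org/x/crypto/ssh; the key path and port are this run's values, and skipping host-key verification is only reasonable because the target is a local container:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and forwarded port are taken from the log; both are
    	// specific to this particular run.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33500", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()

    	// Same hostname-provisioning command the log shows being run.
    	out, err := sess.CombinedOutput(`sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }
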
	I1019 12:53:18.218243  672737 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:53:18.218295  672737 ubuntu.go:190] setting up certificates
	I1019 12:53:18.218310  672737 provision.go:84] configureAuth start
	I1019 12:53:18.218377  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.236696  672737 provision.go:143] copyHostCerts
	I1019 12:53:18.236757  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:53:18.236768  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:53:18.236836  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:53:18.236929  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:53:18.236938  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:53:18.236966  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:53:18.237022  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:53:18.237030  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:53:18.237052  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:53:18.237101  672737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
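
The server cert above is signed by the minikube CA and carries the listed SANs so the endpoint is reachable as localhost, by node name, and by its static IP. A self-contained Go sketch of issuing such a cert with crypto/x509; the in-memory CA below merely stands in for ca.pem/ca-key.pem, and the SAN list is copied from the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (stand-in for minikube's ca.pem/ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs from the log's san=[...] list.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-190708"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-190708"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
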
	I1019 12:53:18.349002  672737 provision.go:177] copyRemoteCerts
	I1019 12:53:18.349061  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:53:18.349100  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.367380  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.464934  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:53:18.484736  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:53:18.502418  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:53:18.520374  672737 provision.go:87] duration metric: took 302.043863ms to configureAuth
	I1019 12:53:18.520411  672737 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:53:18.520616  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:18.520715  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.539107  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.539337  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.539356  672737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:53:18.783336  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:53:18.783368  672737 machine.go:96] duration metric: took 1.034543859s to provisionDockerMachine
	I1019 12:53:18.783380  672737 client.go:171] duration metric: took 6.964145323s to LocalClient.Create
	I1019 12:53:18.783403  672737 start.go:167] duration metric: took 6.964207211s to libmachine.API.Create "newest-cni-190708"
	I1019 12:53:18.783410  672737 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:53:18.783444  672737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:53:18.783533  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:53:18.783575  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.802276  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.904329  672737 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:53:18.908177  672737 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:53:18.908210  672737 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:53:18.908222  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:53:18.908267  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:53:18.908346  672737 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:53:18.908470  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:53:18.916278  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:18.940533  672737 start.go:296] duration metric: took 157.106831ms for postStartSetup
	I1019 12:53:18.940837  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.959008  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:18.959254  672737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:53:18.959294  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.976265  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.069698  672737 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:53:19.074565  672737 start.go:128] duration metric: took 7.257430988s to createHost
	I1019 12:53:19.074635  672737 start.go:83] releasing machines lock for "newest-cni-190708", held for 7.257591431s
	I1019 12:53:19.074702  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:19.092846  672737 ssh_runner.go:195] Run: cat /version.json
	I1019 12:53:19.092896  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.092920  672737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:53:19.092980  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.112049  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.112296  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.259186  672737 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:19.265848  672737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:53:19.301474  672737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:53:19.306225  672737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:53:19.306297  672737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:53:19.331979  672737 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
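
Because kindnet will be the CNI, any pre-existing bridge or podman configs are parked by renaming them with a .mk_disabled suffix, which is what the find/mv pipeline above does. An equivalent sketch in Go:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Same effect as the find/mv pipeline in the log: disable any
    	// bridge or podman CNI config so only kindnet's stays active.
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				fmt.Println(err)
    				continue
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }
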
	I1019 12:53:19.332008  672737 start.go:495] detecting cgroup driver to use...
	I1019 12:53:19.332048  672737 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:53:19.332111  672737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:53:19.348084  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:53:19.360773  672737 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:53:19.360844  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:53:19.377948  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:53:19.395822  672737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:53:19.484678  672737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:53:19.575544  672737 docker.go:234] disabling docker service ...
	I1019 12:53:19.575618  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:53:19.595378  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:53:19.608092  672737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:53:19.693958  672737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:53:19.776371  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:53:19.789375  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:53:19.804627  672737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:53:19.804704  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.814787  672737 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:53:19.814837  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.823551  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.832169  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.840784  672737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:53:19.848724  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.857100  672737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.870352  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.878731  672737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:53:19.886348  672737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:53:19.893759  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:19.973321  672737 ssh_runner.go:195] Run: sudo systemctl restart crio
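
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before restarting CRI-O. A Go sketch of the first two rewrites (pause image and cgroup manager) using line-anchored regexps; paths and values are the ones the log shows:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	conf := string(data)
    	// Equivalent of: sed -i 's|^.*pause_image = .*$|...|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|...|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		fmt.Println(err)
    	}
    }
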
	I1019 12:53:20.077881  672737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:53:20.077979  672737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:53:20.082037  672737 start.go:563] Will wait 60s for crictl version
	I1019 12:53:20.082093  672737 ssh_runner.go:195] Run: which crictl
	I1019 12:53:20.085569  672737 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:53:20.109837  672737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:53:20.109920  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.138350  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.168482  672737 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:53:20.169863  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:20.188025  672737 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:53:20.192265  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.203815  672737 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:53:20.205047  672737 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:53:20.205149  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:20.205199  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.236514  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.236536  672737 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:53:20.236581  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.262051  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.262073  672737 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:53:20.262080  672737 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:53:20.262171  672737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:53:20.262247  672737 ssh_runner.go:195] Run: crio config
	I1019 12:53:20.309916  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:20.309950  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:20.309973  672737 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:53:20.310003  672737 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:53:20.310145  672737 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:53:20.310214  672737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:53:20.318657  672737 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:53:20.318731  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:53:20.326554  672737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:53:20.339030  672737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:53:20.354155  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
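
The rendered config above lands on the node as /var/tmp/minikube/kubeadm.yaml.new. Since it is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), a quick sanity check is to decode each document in turn; a sketch using gopkg.in/yaml.v3 against a local copy (the filename here is hypothetical):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml") // local copy of the generated file
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	// yaml.Decoder yields one document per Decode call, so the "---"
    	// separators split the stream exactly as kubeadm sees it.
    	dec := yaml.NewDecoder(f)
    	for i := 1; ; i++ {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			fmt.Printf("document %d: %v\n", i, err)
    			return
    		}
    		fmt.Printf("document %d: %s %s\n", i, doc.APIVersion, doc.Kind)
    	}
    }
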
	I1019 12:53:20.366696  672737 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:53:20.370356  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
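
The bash one-liner above makes the control-plane.minikube.internal entry idempotent: drop any existing line for the name, append the fresh mapping, and copy the result back into place. The same pattern in Go, with the path and values from this run:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the grep -v / echo pipeline from the log:
    // remove any line ending in "\t<name>", then append "ip\tname".
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry; re-added below
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// On the node this needs root, hence the sudo cp in the original.
    	if err := ensureHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }
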
	I1019 12:53:20.380455  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:20.458942  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:20.485015  672737 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:53:20.485043  672737 certs.go:195] generating shared ca certs ...
	I1019 12:53:20.485070  672737 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.485221  672737 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:53:20.485264  672737 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:53:20.485275  672737 certs.go:257] generating profile certs ...
	I1019 12:53:20.485328  672737 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:53:20.485348  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt with IP's: []
	I1019 12:53:20.585551  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt ...
	I1019 12:53:20.585580  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt: {Name:mk5251db26990dc5997b9e5853758832f57cf196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585769  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key ...
	I1019 12:53:20.585781  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key: {Name:mk05802bac0f3e5b3a8b334617d45fe07eee0068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585867  672737 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:53:20.585883  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 12:53:20.684366  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd ...
	I1019 12:53:20.684395  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd: {Name:mk395ac2723daa6eac9a1a5448aa56dcc3dae795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684562  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd ...
	I1019 12:53:20.684576  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd: {Name:mk1d126d0c5513551abbae58673dc597e26ffe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684650  672737 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt
	I1019 12:53:20.684722  672737 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key
	I1019 12:53:20.684776  672737 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:53:20.684791  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt with IP's: []
	I1019 12:53:20.821306  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt ...
	I1019 12:53:20.821336  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt: {Name:mkf04fb8bbf161179ae86ba91d4a80f873fae21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821524  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key ...
	I1019 12:53:20.821544  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key: {Name:mk22ac123e8932e8db98bd277997b637ec873079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821743  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:53:20.821779  672737 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:53:20.821789  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:53:20.821812  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:53:20.821834  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:53:20.821860  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:53:20.821901  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:20.822529  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:53:20.843244  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:53:20.860464  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:53:20.877640  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:53:20.895480  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:53:20.912797  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:53:20.929757  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:53:20.947521  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:53:20.964869  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:53:20.984248  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:53:21.003061  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:53:21.020532  672737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:53:21.033435  672737 ssh_runner.go:195] Run: openssl version
	I1019 12:53:21.040056  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:53:21.049001  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052716  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052781  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.088149  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:53:21.097154  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:53:21.105495  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109154  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109216  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.144296  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:53:21.153347  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:53:21.161940  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165605  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165655  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.199345  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
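
For each CA above a link /etc/ssl/certs/<hash>.0 is created, where <hash> is OpenSSL's subject hash of the certificate (b5213941 for minikubeCA in this run); that is the filename the TLS stack uses for trust-store lookups. A sketch that computes the hash by shelling out to openssl, as the log itself does, then creates the link:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl x509 -hash -noout prints the subject hash on stdout.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := "/etc/ssl/certs/" + hash + ".0"
    	_ = os.Remove(link) // ln -fs semantics: replace if present
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("linked", link, "->", cert)
    }
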
	I1019 12:53:21.208215  672737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:53:21.212056  672737 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:53:21.212119  672737 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:21.212215  672737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:53:21.212265  672737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:53:21.240234  672737 cri.go:89] found id: ""
	I1019 12:53:21.240301  672737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:53:21.248582  672737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:53:21.256728  672737 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:53:21.256801  672737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:53:21.265096  672737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:53:21.265135  672737 kubeadm.go:157] found existing configuration files:
	
	I1019 12:53:21.265192  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:53:21.273544  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:53:21.273612  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:53:21.282090  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:53:21.290396  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:53:21.290490  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:53:21.300201  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.308252  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:53:21.308306  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.315749  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:53:21.323167  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:53:21.323239  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:53:21.330315  672737 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:53:21.369107  672737 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:53:21.369180  672737 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:53:21.390319  672737 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:53:21.390379  672737 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:53:21.390409  672737 kubeadm.go:318] OS: Linux
	I1019 12:53:21.390480  672737 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:53:21.390540  672737 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:53:21.390652  672737 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:53:21.390735  672737 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:53:21.390790  672737 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:53:21.390890  672737 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:53:21.390973  672737 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:53:21.391026  672737 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:53:21.449690  672737 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:53:21.449859  672737 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:53:21.449988  672737 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:53:21.458017  672737 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:53:21.459979  672737 out.go:252]   - Generating certificates and keys ...
	I1019 12:53:21.460084  672737 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:53:21.460184  672737 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1019 12:53:17.646821  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.647689  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.795394  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:21.795584  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:23.796166  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:21.782609  672737 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:53:22.004817  672737 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:53:22.154911  672737 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:53:22.730145  672737 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:53:22.932723  672737 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:53:22.932904  672737 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.243959  672737 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:53:23.244120  672737 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.410854  672737 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:53:23.472366  672737 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:53:23.643869  672737 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:53:23.644033  672737 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:53:23.711987  672737 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:53:24.037993  672737 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:53:24.501726  672737 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:53:24.744523  672737 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:53:24.859147  672737 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:53:24.859688  672737 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:53:24.863264  672737 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:53:24.864642  672737 out.go:252]   - Booting up control plane ...
	I1019 12:53:24.864730  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:53:24.864796  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:53:24.865498  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:53:24.879079  672737 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:53:24.879207  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:53:24.886821  672737 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:53:24.887101  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:53:24.887199  672737 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:53:24.983491  672737 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:53:24.983708  672737 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:53:25.984614  672737 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001307224s
	I1019 12:53:25.988599  672737 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:53:25.988724  672737 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:53:25.988848  672737 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:53:25.988960  672737 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 12:53:22.146944  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:24.647501  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:26.295683  663517 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:53:26.295713  663517 pod_ready.go:86] duration metric: took 31.505627238s for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.297917  663517 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.301953  663517 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:53:26.301978  663517 pod_ready.go:86] duration metric: took 4.035262ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.304112  663517 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.308120  663517 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:53:26.308144  663517 pod_ready.go:86] duration metric: took 4.009533ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.309999  663517 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.494192  663517 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:53:26.494219  663517 pod_ready.go:86] duration metric: took 184.199033ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.694487  663517 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.094397  663517 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:53:27.094457  663517 pod_ready.go:86] duration metric: took 399.93585ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.293675  663517 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694119  663517 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:53:27.694146  663517 pod_ready.go:86] duration metric: took 400.447048ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694158  663517 pod_ready.go:40] duration metric: took 32.912525222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
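	The readiness loop above polls each control-plane pod until it reports Ready. An equivalent one-liner uses kubectl wait (a sketch, not minikube's actual code path; the context name assumes minikube's default of naming the kubectl context after the profile):
	
	  # wait for CoreDNS pods in kube-system to report Ready (sketch)
	  kubectl --context embed-certs-123864 -n kube-system wait pod \
	    -l k8s-app=kube-dns --for=condition=Ready --timeout=120s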
	I1019 12:53:27.746279  663517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:27.748237  663517 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	I1019 12:53:27.518915  672737 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.530228054s
	I1019 12:53:28.053793  672737 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.061152071s
	I1019 12:53:29.990081  672737 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001429284s
	I1019 12:53:30.001867  672737 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:53:30.014037  672737 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:53:30.024140  672737 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:53:30.024456  672737 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-190708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:53:30.033264  672737 kubeadm.go:318] [bootstrap-token] Using token: gtkds1.9e0h8pmw5r5mqwja
	I1019 12:53:30.034587  672737 out.go:252]   - Configuring RBAC rules ...
	I1019 12:53:30.034754  672737 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:53:30.038773  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:53:30.045039  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:53:30.049009  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:53:30.052044  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:53:30.054665  672737 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:53:30.397490  672737 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:53:30.827821  672737 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:53:31.396481  672737 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:53:31.397310  672737 kubeadm.go:318] 
	I1019 12:53:31.397402  672737 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:53:31.397413  672737 kubeadm.go:318] 
	I1019 12:53:31.397551  672737 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:53:31.397565  672737 kubeadm.go:318] 
	I1019 12:53:31.397596  672737 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:53:31.397650  672737 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:53:31.397698  672737 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:53:31.397705  672737 kubeadm.go:318] 
	I1019 12:53:31.397749  672737 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:53:31.397755  672737 kubeadm.go:318] 
	I1019 12:53:31.397794  672737 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:53:31.397800  672737 kubeadm.go:318] 
	I1019 12:53:31.397861  672737 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:53:31.397953  672737 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:53:31.398040  672737 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:53:31.398051  672737 kubeadm.go:318] 
	I1019 12:53:31.398140  672737 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:53:31.398207  672737 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:53:31.398213  672737 kubeadm.go:318] 
	I1019 12:53:31.398292  672737 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398378  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:53:31.398399  672737 kubeadm.go:318] 	--control-plane 
	I1019 12:53:31.398405  672737 kubeadm.go:318] 
	I1019 12:53:31.398523  672737 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:53:31.398534  672737 kubeadm.go:318] 
	I1019 12:53:31.398627  672737 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398790  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:53:31.401824  672737 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:53:31.402002  672737 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
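	The join commands above embed the bootstrap token gtkds1.9e0h8pmw5r5mqwja, which kubeadm expires after 24 hours by default. If it lapses, the standard kubeadm subcommands on the control-plane node list existing tokens and mint a fresh join command:
	
	  kubeadm token list
	  kubeadm token create --print-join-command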
	I1019 12:53:31.402023  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:31.402032  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:31.403960  672737 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:53:31.405314  672737 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:53:31.410474  672737 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:53:31.410496  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:53:31.424273  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
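	Here minikube writes the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl binary. The resulting conflist lands in the directory CRI-O watches for CNI config; a quick way to confirm it from the host (a sketch using this run's profile name):
	
	  minikube -p newest-cni-190708 ssh "ls -l /etc/cni/net.d/"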
	W1019 12:53:27.147074  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:29.645647  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:31.646857  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:31.641912  672737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:53:31.642008  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:31.642011  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-190708 minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-190708 minikube.k8s.io/primary=true
	I1019 12:53:31.652529  672737 ops.go:34] apiserver oom_adj: -16
	I1019 12:53:31.718996  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.219629  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.719834  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.219813  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.719692  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.219076  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.719433  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.219917  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.719034  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.785029  672737 kubeadm.go:1113] duration metric: took 4.143080971s to wait for elevateKubeSystemPrivileges
	I1019 12:53:35.785068  672737 kubeadm.go:402] duration metric: took 14.57295181s to StartCluster
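	The repeated "kubectl get sa default" runs above appear to be the elevateKubeSystemPrivileges wait: the cluster-admin binding created earlier only takes effect once the default ServiceAccount exists, so minikube polls for it every 500ms. As a plain-shell sketch of the same wait (illustrative only; minikube does this in Go):
	
	  until kubectl get serviceaccount default >/dev/null 2>&1; do
	    sleep 0.5   # retry until the token controller has created the default SA
	  done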
	I1019 12:53:35.785101  672737 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.785174  672737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:35.787497  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.787794  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:53:35.787820  672737 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:35.787897  672737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:53:35.787993  672737 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:53:35.788017  672737 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	I1019 12:53:35.788020  672737 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	I1019 12:53:35.788053  672737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:53:35.788062  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:35.788057  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.788500  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.788555  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.789512  672737 out.go:179] * Verifying Kubernetes components...
	I1019 12:53:35.791378  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:35.812380  672737 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:53:33.646988  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:34.648076  664256 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:53:34.648104  664256 pod_ready.go:86] duration metric: took 36.507165259s for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.650741  664256 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.654523  664256 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.654547  664256 pod_ready.go:86] duration metric: took 3.785206ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.656429  664256 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.660685  664256 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.660712  664256 pod_ready.go:86] duration metric: took 4.258461ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.662348  664256 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.844857  664256 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.844886  664256 pod_ready.go:86] duration metric: took 182.521582ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.044783  664256 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.445005  664256 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:53:35.445031  664256 pod_ready.go:86] duration metric: took 400.222332ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.645060  664256 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045246  664256 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:36.045282  664256 pod_ready.go:86] duration metric: took 400.190569ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045298  664256 pod_ready.go:40] duration metric: took 37.908676389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:36.105764  664256 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.108299  664256 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
	I1019 12:53:35.813186  672737 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	I1019 12:53:35.813237  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.813735  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.815209  672737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.815225  672737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:53:35.815282  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.843451  672737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:35.843479  672737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:53:35.843567  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.844218  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.868726  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.877614  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:53:35.929249  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:35.955142  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.988275  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:36.052147  672737 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
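	The sed pipeline a few lines up splices a hosts block into the CoreDNS Corefile so pods can resolve host.minikube.internal to the gateway address. Reconstructed from the sed expressions in that command, the injected fragment is:
	
	  hosts {
	     192.168.94.1 host.minikube.internal
	     fallthrough
	  }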
	I1019 12:53:36.053790  672737 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:53:36.053847  672737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:53:36.305744  672737 api_server.go:72] duration metric: took 517.881771ms to wait for apiserver process to appear ...
	I1019 12:53:36.305769  672737 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:53:36.305790  672737 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:53:36.310834  672737 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:53:36.311767  672737 api_server.go:141] control plane version: v1.34.1
	I1019 12:53:36.311798  672737 api_server.go:131] duration metric: took 6.020737ms to wait for apiserver health ...
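	The healthz probe is a plain HTTPS GET and can be reproduced against this run's apiserver address with curl (-k skips certificate verification, since the cluster CA is not in the host trust store):
	
	  curl -k https://192.168.94.2:8443/healthz   # expect the body "ok"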
	I1019 12:53:36.311809  672737 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:53:36.313872  672737 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:53:36.314880  672737 system_pods.go:59] 8 kube-system pods found
	I1019 12:53:36.314917  672737 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314933  672737 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:53:36.314945  672737 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:53:36.314955  672737 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:53:36.314961  672737 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:53:36.314969  672737 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:53:36.314976  672737 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:53:36.314981  672737 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314992  672737 system_pods.go:74] duration metric: took 3.173905ms to wait for pod list to return data ...
	I1019 12:53:36.315000  672737 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:53:36.315055  672737 addons.go:514] duration metric: took 527.155312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:53:36.317196  672737 default_sa.go:45] found service account: "default"
	I1019 12:53:36.317218  672737 default_sa.go:55] duration metric: took 2.212206ms for default service account to be created ...
	I1019 12:53:36.317230  672737 kubeadm.go:586] duration metric: took 529.375092ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:36.317251  672737 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:53:36.319523  672737 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:53:36.319545  672737 node_conditions.go:123] node cpu capacity is 8
	I1019 12:53:36.319557  672737 node_conditions.go:105] duration metric: took 2.300039ms to run NodePressure ...
	I1019 12:53:36.319567  672737 start.go:241] waiting for startup goroutines ...
	I1019 12:53:36.557265  672737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-190708" context rescaled to 1 replicas
	I1019 12:53:36.557311  672737 start.go:246] waiting for cluster config update ...
	I1019 12:53:36.557328  672737 start.go:255] writing updated cluster config ...
	I1019 12:53:36.557703  672737 ssh_runner.go:195] Run: rm -f paused
	I1019 12:53:36.609706  672737 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.612691  672737 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
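	The sections that follow are the failure-time diagnostic dump for the embed-certs-123864 profile: CRI-O logs, CRI container state, the node description, dmesg, and per-component logs. A comparable dump can be pulled from a live profile with:
	
	  minikube -p embed-certs-123864 logs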
	
	
	==> CRI-O <==
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.690355084Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.694059835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.694086905Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.814076Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed00c565-41b5-4fa0-a40d-b2326db60601 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.816757129Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3fed050f-5e63-41ab-baca-97b6e152924c name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.820112474Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=b960e60f-d4d8-4b5e-8891-978a145e024a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.822130907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.828263773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.82894693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.858056235Z" level=info msg="Created container a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=b960e60f-d4d8-4b5e-8891-978a145e024a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.858674367Z" level=info msg="Starting container: a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4" id=a3967556-14a5-4829-8a8a-faa4b362c425 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.860409007Z" level=info msg="Started container" PID=1753 containerID=a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper id=a3967556-14a5-4829-8a8a-faa4b362c425 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a883e16d7bc14275a4e818b7858fcf8387529de2ded29cde73a09745bbfb6a65
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.927498478Z" level=info msg="Removing container: 673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2" id=ebcc5abd-cc59-493d-adc1-55fc8d55f317 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.938078974Z" level=info msg="Removed container 673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=ebcc5abd-cc59-493d-adc1-55fc8d55f317 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.944619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5ead4397-cfc6-49c2-b9fb-45a0b5a3ce9d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.945591245Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=90197312-4ced-45f8-9103-8fc89f74933d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.946675202Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6843691f-9a3e-4199-b141-7ba4952c861f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.946958387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.951960602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952159678Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9d6dcbd12f650ab6877b6e7b6a1ed7d676e45a24840460cdb67f22d9de3d27f1/merged/etc/passwd: no such file or directory"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952192335Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9d6dcbd12f650ab6877b6e7b6a1ed7d676e45a24840460cdb67f22d9de3d27f1/merged/etc/group: no such file or directory"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952501088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.976245526Z" level=info msg="Created container 120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394: kube-system/storage-provisioner/storage-provisioner" id=6843691f-9a3e-4199-b141-7ba4952c861f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.976831542Z" level=info msg="Starting container: 120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394" id=fe480c42-5ff0-4b35-8925-105f3d6b38f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.978491427Z" level=info msg="Started container" PID=1767 containerID=120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394 description=kube-system/storage-provisioner/storage-provisioner id=fe480c42-5ff0-4b35-8925-105f3d6b38f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=061ffbf2eae7a5bff5d5bf2d77fbbb1b2373fe2a401b5c5aa14af44f68af45d7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	120f5bcceb6a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   061ffbf2eae7a       storage-provisioner                          kube-system
	a632aa823b9fc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   a883e16d7bc14       dashboard-metrics-scraper-6ffb444bf9-64x9j   kubernetes-dashboard
	60dc588bc47f0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   dda631042a0ac       kubernetes-dashboard-855c9754f9-b55t5        kubernetes-dashboard
	5d92a5a60399f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   10c470c3a1cf7       coredns-66bc5c9577-bw9l4                     kube-system
	f8a571de676c8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   bbc62fa754a1c       busybox                                      default
	0bc1ee77f0b5e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   0b0c2994533ca       kube-proxy-gvrcz                             kube-system
	b5ad804329727       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   402b70b14c518       kindnet-zkvs7                                kube-system
	6db88a089aeb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   061ffbf2eae7a       storage-provisioner                          kube-system
	0d6bd37e74ce4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   a3c89edce9516       kube-controller-manager-embed-certs-123864   kube-system
	2948778c0277b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   a1e637d500143       kube-scheduler-embed-certs-123864            kube-system
	f0fd8fcb3c6d8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   51601a81a56ad       etcd-embed-certs-123864                      kube-system
	ce30ef8a95f35       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   a16ad5c566f92       kube-apiserver-embed-certs-123864            kube-system
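	The table above is the CRI view of container state; on a CRI-O node the same listing, including exited containers, comes from crictl:
	
	  sudo crictl ps -a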
	
	
	==> coredns [5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44816 - 46985 "HINFO IN 3190984299037100603.72535971130669998. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.094747334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
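	The dial timeouts above target 10.96.0.1:443, the ClusterIP of the kubernetes Service, and are consistent with CoreDNS starting while the restarted apiserver was still coming up; the informers synced once it returned. Whether the Service fronts a live apiserver endpoint can be checked with:
	
	  kubectl get endpoints kubernetes -n default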
	
	
	==> describe nodes <==
	Name:               embed-certs-123864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-123864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-123864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-123864
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-123864
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                487d540e-33e7-428f-8d26-3b1ead032aff
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 coredns-66bc5c9577-bw9l4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m7s
	  kube-system                 etcd-embed-certs-123864                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m15s
	  kube-system                 kindnet-zkvs7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-embed-certs-123864             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-embed-certs-123864    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-gvrcz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-embed-certs-123864             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-64x9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b55t5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m7s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s              kubelet          Node embed-certs-123864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s              kubelet          Node embed-certs-123864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s              kubelet          Node embed-certs-123864 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m9s               node-controller  Node embed-certs-123864 event: Registered Node embed-certs-123864 in Controller
	  Normal  NodeReady                87s                kubelet          Node embed-certs-123864 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node embed-certs-123864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node embed-certs-123864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node embed-certs-123864 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-123864 event: Registered Node embed-certs-123864 in Controller
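	This node description (labels, capacity, conditions, and events) is standard kubectl output and can be regenerated at any time with:
	
	  kubectl describe node embed-certs-123864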
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc] <==
	{"level":"warn","ts":"2025-10-19T12:52:52.252538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.259392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.266268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.276248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.284258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.291325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.298701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.304655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.312324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.320209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.341575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.348758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.364936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.373163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.381124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.388551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.395392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.403789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.412033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.433642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.437418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.444648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.452720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.523651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46686","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:53:02.968539Z","caller":"traceutil/trace.go:172","msg":"trace[506846127] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"113.296073ms","start":"2025-10-19T12:53:02.855219Z","end":"2025-10-19T12:53:02.968515Z","steps":["trace[506846127] 'process raft request'  (duration: 113.068692ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:53:42 up  2:36,  0 user,  load average: 3.25, 4.46, 3.06
	Linux embed-certs-123864 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e] <==
	I1019 12:52:54.383272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:54.383522       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 12:52:54.383702       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:54.383726       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:54.383740       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:54.623313       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:54.623346       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:54.623369       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:54.623508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:54.881580       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:54.881698       1 metrics.go:72] Registering metrics
	I1019 12:52:54.881817       1 controller.go:711] "Syncing nftables rules"
	I1019 12:53:04.623597       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:04.623686       1 main.go:301] handling current node
	I1019 12:53:14.625556       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:14.625610       1 main.go:301] handling current node
	I1019 12:53:24.624362       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:24.624394       1 main.go:301] handling current node
	I1019 12:53:34.630146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:34.630188       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25] <==
	I1019 12:52:53.115608       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:52:53.118521       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 12:52:53.118632       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:52:53.119198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:52:53.119261       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 12:52:53.119308       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:53.119339       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:53.119364       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:53.119387       1 cache.go:39] Caches are synced for autoregister controller
	E1019 12:52:53.124378       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:53.130653       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:53.142042       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:53.142086       1 policy_source.go:240] refreshing policies
	I1019 12:52:53.181607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:53.454327       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:53.486325       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:53.509765       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:53.520277       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:53.526954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:53.563578       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.81.127"}
	I1019 12:52:53.576736       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.143.102"}
	I1019 12:52:54.014803       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:56.386191       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:52:56.536025       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:52:56.635838       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4] <==
	I1019 12:52:56.031925       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:52:56.032030       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:52:56.032111       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-123864"
	I1019 12:52:56.032196       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:56.032252       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:52:56.032319       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:52:56.033462       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:52:56.036401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:52:56.038047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:52:56.038143       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:52:56.038155       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:52:56.038156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:56.040280       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:56.040368       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:56.046063       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:52:56.050271       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:52:56.053541       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:52:56.056807       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:52:56.059102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:52:56.060315       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:56.060315       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:56.061546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:52:56.064831       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:56.069100       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:52:56.074380       1 shared_informer.go:356] "Caches are synced" controller="expand"
	
	
	==> kube-proxy [0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0] <==
	I1019 12:52:54.239483       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:54.297557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:54.398667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:54.398773       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 12:52:54.398934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:54.424161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:54.424281       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:54.430882       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:54.431310       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:54.431613       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:54.434043       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:54.434391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:54.434112       1 config.go:200] "Starting service config controller"
	I1019 12:52:54.434478       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:54.434716       1 config.go:309] "Starting node config controller"
	I1019 12:52:54.435089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:54.435142       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:54.434127       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:54.435205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:54.535498       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:54.535528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:52:54.535558       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea] <==
	I1019 12:52:52.324696       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:52:53.369816       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:53.369932       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:53.375746       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:52:53.375982       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:52:53.376084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:53.376808       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:53.376960       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:53.376117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.377090       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.376805       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:53.476717       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:52:53.477323       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.477923       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599397     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks8b\" (UniqueName: \"kubernetes.io/projected/479ad879-6024-41ed-a32e-fa719e095f1c-kube-api-access-xks8b\") pod \"dashboard-metrics-scraper-6ffb444bf9-64x9j\" (UID: \"479ad879-6024-41ed-a32e-fa719e095f1c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j"
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599468     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2677e6ff-bf6f-4e47-acea-acc1cfbc5c26-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b55t5\" (UID: \"2677e6ff-bf6f-4e47-acea-acc1cfbc5c26\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5"
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599498     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkgn\" (UniqueName: \"kubernetes.io/projected/2677e6ff-bf6f-4e47-acea-acc1cfbc5c26-kube-api-access-2lkgn\") pod \"kubernetes-dashboard-855c9754f9-b55t5\" (UID: \"2677e6ff-bf6f-4e47-acea-acc1cfbc5c26\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5"
	Oct 19 12:52:59 embed-certs-123864 kubelet[725]: I1019 12:52:59.868184     725 scope.go:117] "RemoveContainer" containerID="815d9c8c3ea768b62ddedeafc571e1b36a943e738d5576edefa90dbdbf346d74"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: I1019 12:53:00.872974     725 scope.go:117] "RemoveContainer" containerID="815d9c8c3ea768b62ddedeafc571e1b36a943e738d5576edefa90dbdbf346d74"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: I1019 12:53:00.873316     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: E1019 12:53:00.873569     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:01 embed-certs-123864 kubelet[725]: I1019 12:53:01.880133     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:01 embed-certs-123864 kubelet[725]: E1019 12:53:01.880384     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:03 embed-certs-123864 kubelet[725]: I1019 12:53:03.897416     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5" podStartSLOduration=1.465436805 podStartE2EDuration="7.89739309s" podCreationTimestamp="2025-10-19 12:52:56 +0000 UTC" firstStartedPulling="2025-10-19 12:52:56.789655719 +0000 UTC m=+6.072370541" lastFinishedPulling="2025-10-19 12:53:03.221612016 +0000 UTC m=+12.504326826" observedRunningTime="2025-10-19 12:53:03.897143283 +0000 UTC m=+13.179858111" watchObservedRunningTime="2025-10-19 12:53:03.89739309 +0000 UTC m=+13.180107919"
	Oct 19 12:53:06 embed-certs-123864 kubelet[725]: I1019 12:53:06.620686     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:06 embed-certs-123864 kubelet[725]: E1019 12:53:06.620913     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.813548     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.926138     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.926476     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: E1019 12:53:18.926687     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:24 embed-certs-123864 kubelet[725]: I1019 12:53:24.944213     725 scope.go:117] "RemoveContainer" containerID="6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	Oct 19 12:53:26 embed-certs-123864 kubelet[725]: I1019 12:53:26.620949     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:26 embed-certs-123864 kubelet[725]: E1019 12:53:26.621163     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:38 embed-certs-123864 kubelet[725]: I1019 12:53:38.813659     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:38 embed-certs-123864 kubelet[725]: E1019 12:53:38.813869     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: kubelet.service: Consumed 1.605s CPU time.
	
	
	==> kubernetes-dashboard [60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696] <==
	2025/10/19 12:53:03 Using namespace: kubernetes-dashboard
	2025/10/19 12:53:03 Using in-cluster config to connect to apiserver
	2025/10/19 12:53:03 Using secret token for csrf signing
	2025/10/19 12:53:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:53:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:53:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:53:03 Generating JWE encryption key
	2025/10/19 12:53:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:53:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:53:03 Initializing JWE encryption key from synchronized object
	2025/10/19 12:53:03 Creating in-cluster Sidecar client
	2025/10/19 12:53:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:53:03 Serving insecurely on HTTP port: 9090
	2025/10/19 12:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:53:03 Starting overwatch
	
	
	==> storage-provisioner [120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394] <==
	I1019 12:53:24.992115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:53:25.000318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:53:25.000365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:53:25.002852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:28.458458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:32.719295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:36.318242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:39.372089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.394266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.399512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:42.399667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:42.399860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321!
	I1019 12:53:42.399857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45d62354-4f4f-445a-9d0d-795d15878b3f", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321 became leader
	W1019 12:53:42.401691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.405205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:42.500006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321!
	
	
	==> storage-provisioner [6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c] <==
	I1019 12:52:54.210661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:53:24.215945       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
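The second storage-provisioner instance in the dump above exits fatally because its request to the in-cluster apiserver VIP timed out (GET "https://10.96.0.1:443/version?timeout=32s" failed with "dial tcp 10.96.0.1:443: i/o timeout") while the control plane was restarting. A minimal Go sketch of that reachability check, with the VIP copied from the log and the 5s timeout an illustrative assumption rather than anything the provisioner configures:

	// probe.go: dial the in-cluster apiserver service VIP, mirroring the
	// connectivity failure in the storage-provisioner log above. The address
	// comes from the log; the 5s timeout is an assumption for illustration.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // e.g. "i/o timeout"
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}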
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-123864 -n embed-certs-123864: exit status 2 (316.168907ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
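minikube status signals component state through its exit code as well as its output, which is why the harness prints "(may be ok)" instead of failing outright: here the queried field prints Running while the exit code is still 2 (the pause attempt had stopped the kubelet, per the journal above). A sketch of tolerating that case the way the harness does, assuming the binary path and profile shown in this report; treating exit code 2 as a soft failure is the test convention here, not documented minikube behaviour:

	// statuscheck.go: run `minikube status` and accept exit status 2 as a
	// soft failure, matching the "(may be ok)" handling above. Binary path
	// and profile name are copied from this report.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-123864")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			fmt.Printf("status %q, exit 2 (may be ok)\n", out)
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
			return
		}
		fmt.Printf("status %q\n", out)
	}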
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-123864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
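The proxy snapshot is recorded because a stray HTTP(S)_PROXY on the host is a common cause of driver and apiserver connectivity failures in these tests. A short sketch that reproduces the snapshot line, with the "<empty>" placeholder mirroring the harness's own formatting:

	// envsnapshot.go: print the three standard proxy variables the way the
	// post-mortem does, substituting "<empty>" for unset values.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q ", k, v)
		}
		fmt.Println()
	}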
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-123864
helpers_test.go:243: (dbg) docker inspect embed-certs-123864:

-- stdout --
	[
	    {
	        "Id": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	        "Created": "2025-10-19T12:51:12.601870775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 663721,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:44.306581522Z",
	            "FinishedAt": "2025-10-19T12:52:43.47687446Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hostname",
	        "HostsPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/hosts",
	        "LogPath": "/var/lib/docker/containers/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509/53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509-json.log",
	        "Name": "/embed-certs-123864",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-123864:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-123864",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53e8a5bc9e53794728d0fd1ce655e25f7fd2a29da4a62cfccd0bb5e39e00d509",
	                "LowerDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a47111221e0d12e9bca77267d9c1c9e4f1c802b0874f893ca4a091ad9fba6418/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-123864",
	                "Source": "/var/lib/docker/volumes/embed-certs-123864/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-123864",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-123864",
	                "name.minikube.sigs.k8s.io": "embed-certs-123864",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "660fe739191fecd6c47c82610de0ce6eac5d5ed9d24e3f1c9f8c36072b6b1198",
	            "SandboxKey": "/var/run/docker/netns/660fe739191f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-123864": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:4f:ea:d8:58:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fcd0a3e89589b9fe587e991244f1cb1f39b034b86cfecd1e038afdfb125c5bb4",
	                    "EndpointID": "20d2d8872ec6038fe37933db85098208fa811c52be7122f11de7f90e4e687439",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-123864",
	                        "53e8a5bc9e53"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
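What the harness mainly needs from the inspect dump is NetworkSettings.Ports: the container's 8443/tcp apiserver endpoint is published on 127.0.0.1:33493, which is how host-side commands reach the cluster. A Go sketch that extracts that mapping from `docker inspect` output; the struct declares only the fields used here, a deliberately partial view of the Engine's inspect schema:

	// inspectport.go: decode `docker inspect <name>` JSON (an array with one
	// element per container) and print the host binding for 8443/tcp, e.g.
	// 127.0.0.1:33493 for embed-certs-123864 in the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		raw, err := exec.Command("docker", "inspect", "embed-certs-123864").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(raw, &containers); err != nil || len(containers) == 0 {
			fmt.Println("unexpected inspect output:", err)
			return
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}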
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864: exit status 2 (305.318454ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-123864 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-123864 logs -n 25: (1.067594603s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:51 UTC │
	│ start   │ -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:51 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:11.615299  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615311  672737 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:11.615315  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615551  672737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:11.616038  672737 out.go:368] Setting JSON to false
	I1019 12:53:11.617746  672737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9340,"bootTime":1760869052,"procs":566,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:11.617846  672737 start.go:141] virtualization: kvm guest
	I1019 12:53:11.619915  672737 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:11.621699  672737 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:11.621736  672737 notify.go:220] Checking for updates...
	I1019 12:53:11.624129  672737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:11.626246  672737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:11.627453  672737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:11.628681  672737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:11.629995  672737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:11.631642  672737 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631786  672737 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631990  672737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:11.658136  672737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:11.658233  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.722933  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.711540262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.723046  672737 docker.go:318] overlay module found
	I1019 12:53:11.724874  672737 out.go:179] * Using the docker driver based on user configuration
	I1019 12:53:11.726372  672737 start.go:305] selected driver: docker
	I1019 12:53:11.726394  672737 start.go:925] validating driver "docker" against <nil>
	I1019 12:53:11.726412  672737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:11.727020  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.787909  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.778156597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.788107  672737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 12:53:11.788149  672737 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 12:53:11.788529  672737 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:11.790331  672737 out.go:179] * Using Docker driver with root privileges
	I1019 12:53:11.791430  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:11.791511  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:11.791528  672737 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:53:11.791587  672737 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:11.792873  672737 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:11.794127  672737 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:11.795216  672737 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:11.796409  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:11.796465  672737 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:11.796477  672737 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:11.796486  672737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:11.796551  672737 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:11.796562  672737 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:11.796649  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:11.796666  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json: {Name:mk458b42b0f9f21f6e5af311f76e8caf9c4c5efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:11.816881  672737 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:11.816898  672737 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:11.816920  672737 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:11.816943  672737 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:11.817032  672737 start.go:364] duration metric: took 74.015µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:11.817054  672737 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:11.817117  672737 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:53:09.146473  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:11.146837  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:10.296323  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:12.795707  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:11.818963  672737 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:53:11.819197  672737 start.go:159] libmachine.API.Create for "newest-cni-190708" (driver="docker")
	I1019 12:53:11.819227  672737 client.go:168] LocalClient.Create starting
	I1019 12:53:11.819287  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:53:11.819320  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819338  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819384  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:53:11.819402  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819412  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819803  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:53:11.837346  672737 cli_runner.go:211] docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:53:11.837404  672737 network_create.go:284] running [docker network inspect newest-cni-190708] to gather additional debugging logs...
	I1019 12:53:11.837466  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708
	W1019 12:53:11.853768  672737 cli_runner.go:211] docker network inspect newest-cni-190708 returned with exit code 1
	I1019 12:53:11.853794  672737 network_create.go:287] error running [docker network inspect newest-cni-190708]: docker network inspect newest-cni-190708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-190708 not found
	I1019 12:53:11.853806  672737 network_create.go:289] output of [docker network inspect newest-cni-190708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-190708 not found
	
	** /stderr **
	I1019 12:53:11.853902  672737 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:11.872131  672737 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:53:11.872777  672737 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:53:11.873176  672737 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:53:11.873710  672737 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:53:11.874346  672737 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de90530a2892 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:1b:d3:5b:94:95} reservation:<nil>}
	I1019 12:53:11.875186  672737 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7d700}
	I1019 12:53:11.875210  672737 network_create.go:124] attempt to create docker network newest-cni-190708 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:53:11.875256  672737 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-190708 newest-cni-190708
	I1019 12:53:11.933015  672737 network_create.go:108] docker network newest-cni-190708 192.168.94.0/24 created
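	The subnet scan above can be reproduced by hand. A minimal bash sketch, assuming only that the Docker CLI is on PATH (network names are from this run):
	
	# print each Docker network next to the subnet it occupies, so a free 192.168.x.0/24 is easy to spot
	for net in $(docker network ls --format '{{.Name}}'); do
	  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done
	
	In this run the first five 192.168.x.0/24 candidates were taken, so minikube settled on 192.168.94.0/24.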
	I1019 12:53:11.933049  672737 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-190708" container
	I1019 12:53:11.933120  672737 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:53:11.950774  672737 cli_runner.go:164] Run: docker volume create newest-cni-190708 --label name.minikube.sigs.k8s.io=newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:53:11.967572  672737 oci.go:103] Successfully created a docker volume newest-cni-190708
	I1019 12:53:11.967650  672737 cli_runner.go:164] Run: docker run --rm --name newest-cni-190708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --entrypoint /usr/bin/test -v newest-cni-190708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:53:12.367353  672737 oci.go:107] Successfully prepared a docker volume newest-cni-190708
	I1019 12:53:12.367407  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:12.367450  672737 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:53:12.367533  672737 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 12:53:13.646716  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.646757  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.295646  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:17.297846  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:16.825912  672737 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458335671s)
	I1019 12:53:16.825946  672737 kic.go:203] duration metric: took 4.45849341s to extract preloaded images to volume ...
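	The extracted preload now sits in the newest-cni-190708 volume, which the node container mounts at /var a few steps later. A quick spot-check sketch; the lib/containers path is an assumption about the cri-o preload layout, not something captured in this log:
	
	# peek into the volume the tarball was extracted into (path layout assumed, not logged)
	docker run --rm -v newest-cni-190708:/var busybox ls /var/lib/containers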
	W1019 12:53:16.826042  672737 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:53:16.826073  672737 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:53:16.826110  672737 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:53:16.883735  672737 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-190708 --name newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-190708 --network newest-cni-190708 --ip 192.168.94.2 --volume newest-cni-190708:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:53:17.149721  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Running}}
	I1019 12:53:17.168092  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.187070  672737 cli_runner.go:164] Run: docker exec newest-cni-190708 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:53:17.235594  672737 oci.go:144] the created container "newest-cni-190708" has a running status.
	I1019 12:53:17.235624  672737 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa...
	I1019 12:53:17.641114  672737 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:53:17.666983  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.686164  672737 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:53:17.686197  672737 kic_runner.go:114] Args: [docker exec --privileged newest-cni-190708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:53:17.730607  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.748800  672737 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:17.748886  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.768809  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.769043  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.769056  672737 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:17.904434  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:17.904466  672737 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:53:17.904532  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.923140  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.923351  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.923364  672737 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:53:18.066330  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:18.066401  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.084720  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.084937  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.084955  672737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:53:18.218215  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:53:18.218243  672737 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:53:18.218295  672737 ubuntu.go:190] setting up certificates
	I1019 12:53:18.218310  672737 provision.go:84] configureAuth start
	I1019 12:53:18.218377  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.236696  672737 provision.go:143] copyHostCerts
	I1019 12:53:18.236757  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:53:18.236768  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:53:18.236836  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:53:18.236929  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:53:18.236938  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:53:18.236966  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:53:18.237022  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:53:18.237030  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:53:18.237052  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:53:18.237101  672737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:53:18.349002  672737 provision.go:177] copyRemoteCerts
	I1019 12:53:18.349061  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:53:18.349100  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.367380  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.464934  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:53:18.484736  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:53:18.502418  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:53:18.520374  672737 provision.go:87] duration metric: took 302.043863ms to configureAuth
	I1019 12:53:18.520411  672737 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:53:18.520616  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:18.520715  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.539107  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.539337  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.539356  672737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:53:18.783336  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:53:18.783368  672737 machine.go:96] duration metric: took 1.034543859s to provisionDockerMachine
	I1019 12:53:18.783380  672737 client.go:171] duration metric: took 6.964145323s to LocalClient.Create
	I1019 12:53:18.783403  672737 start.go:167] duration metric: took 6.964207211s to libmachine.API.Create "newest-cni-190708"
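	The CRIO_MINIKUBE_OPTIONS drop-in written above only takes effect if the kicbase crio unit reads /etc/sysconfig/crio.minikube, which this log does not show directly. Two spot-checks from the host:
	
	# confirm the options file landed and cri-o survived the restart (unit wiring assumed)
	docker exec newest-cni-190708 cat /etc/sysconfig/crio.minikube
	docker exec newest-cni-190708 systemctl is-active crio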
	I1019 12:53:18.783410  672737 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:53:18.783444  672737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:53:18.783533  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:53:18.783575  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.802276  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.904329  672737 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:53:18.908177  672737 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:53:18.908210  672737 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:53:18.908222  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:53:18.908267  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:53:18.908346  672737 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:53:18.908470  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:53:18.916278  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:18.940533  672737 start.go:296] duration metric: took 157.106831ms for postStartSetup
	I1019 12:53:18.940837  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.959008  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:18.959254  672737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:53:18.959294  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.976265  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.069698  672737 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:53:19.074565  672737 start.go:128] duration metric: took 7.257430988s to createHost
	I1019 12:53:19.074635  672737 start.go:83] releasing machines lock for "newest-cni-190708", held for 7.257591431s
	I1019 12:53:19.074702  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:19.092846  672737 ssh_runner.go:195] Run: cat /version.json
	I1019 12:53:19.092896  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.092920  672737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:53:19.092980  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.112049  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.112296  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.259186  672737 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:19.265848  672737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:53:19.301474  672737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:53:19.306225  672737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:53:19.306297  672737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:53:19.331979  672737 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
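	After the rename pass, both bridge configs named above should carry a .mk_disabled suffix; listing the directory makes that visible (container name from this run):
	
	# the disabled bridge CNI configs should now end in .mk_disabled
	docker exec newest-cni-190708 ls /etc/cni/net.d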
	I1019 12:53:19.332008  672737 start.go:495] detecting cgroup driver to use...
	I1019 12:53:19.332048  672737 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:53:19.332111  672737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:53:19.348084  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:53:19.360773  672737 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:53:19.360844  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:53:19.377948  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:53:19.395822  672737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:53:19.484678  672737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:53:19.575544  672737 docker.go:234] disabling docker service ...
	I1019 12:53:19.575618  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:53:19.595378  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:53:19.608092  672737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:53:19.693958  672737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:53:19.776371  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:53:19.789375  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:53:19.804627  672737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:53:19.804704  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.814787  672737 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:53:19.814837  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.823551  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.832169  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.840784  672737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:53:19.848724  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.857100  672737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.870352  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.878731  672737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:53:19.886348  672737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:53:19.893759  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:19.973321  672737 ssh_runner.go:195] Run: sudo systemctl restart crio
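	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place before the restart; a grep over the keys it touches is a cheap way to confirm the intended values stuck:
	
	# spot-check the values the sed edits should have left behind
	docker exec newest-cni-190708 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf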
	I1019 12:53:20.077881  672737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:53:20.077979  672737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:53:20.082037  672737 start.go:563] Will wait 60s for crictl version
	I1019 12:53:20.082093  672737 ssh_runner.go:195] Run: which crictl
	I1019 12:53:20.085569  672737 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:53:20.109837  672737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:53:20.109920  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.138350  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.168482  672737 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:53:20.169863  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:20.188025  672737 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:53:20.192265  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
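	The grep-then-rewrite one-liner above is an idempotent way to pin a /etc/hosts entry: drop any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. The same idiom reappears below for control-plane.minikube.internal. A check of the result:
	
	# exactly one mapping for host.minikube.internal should remain
	docker exec newest-cni-190708 grep host.minikube.internal /etc/hosts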
	I1019 12:53:20.203815  672737 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:53:20.205047  672737 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:53:20.205149  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:20.205199  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.236514  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.236536  672737 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:53:20.236581  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.262051  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.262073  672737 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:53:20.262080  672737 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:53:20.262171  672737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:53:20.262247  672737 ssh_runner.go:195] Run: crio config
	I1019 12:53:20.309916  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:20.309950  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:20.309973  672737 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:53:20.310003  672737 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:53:20.310145  672737 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:53:20.310214  672737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:53:20.318657  672737 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:53:20.318731  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:53:20.326554  672737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:53:20.339030  672737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:53:20.354155  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
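	Before kubeadm consumes the 2211-byte config just copied, it can be sanity-checked with the kubeadm binary this run already found under /var/lib/minikube/binaries; the `config validate` subcommand is assumed to be available in v1.34.1:
	
	# validate the freshly scp'd kubeadm config (still named kubeadm.yaml.new at this point in the run)
	docker exec newest-cni-190708 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new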
	I1019 12:53:20.366696  672737 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:53:20.370356  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.380455  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:20.458942  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:20.485015  672737 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:53:20.485043  672737 certs.go:195] generating shared ca certs ...
	I1019 12:53:20.485070  672737 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.485221  672737 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:53:20.485264  672737 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:53:20.485275  672737 certs.go:257] generating profile certs ...
	I1019 12:53:20.485328  672737 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:53:20.485348  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt with IP's: []
	I1019 12:53:20.585551  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt ...
	I1019 12:53:20.585580  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt: {Name:mk5251db26990dc5997b9e5853758832f57cf196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585769  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key ...
	I1019 12:53:20.585781  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key: {Name:mk05802bac0f3e5b3a8b334617d45fe07eee0068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585867  672737 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:53:20.585883  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 12:53:20.684366  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd ...
	I1019 12:53:20.684395  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd: {Name:mk395ac2723daa6eac9a1a5448aa56dcc3dae795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684562  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd ...
	I1019 12:53:20.684576  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd: {Name:mk1d126d0c5513551abbae58673dc597e26ffe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684650  672737 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt
	I1019 12:53:20.684722  672737 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key
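	The IP SAN list requested above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]) can be read back from the finished cert with openssl; the path is from this run:
	
	# print the Subject Alternative Name block of the freshly assembled apiserver cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt | grep -A1 'Subject Alternative Name'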
	I1019 12:53:20.684776  672737 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:53:20.684791  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt with IP's: []
	I1019 12:53:20.821306  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt ...
	I1019 12:53:20.821336  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt: {Name:mkf04fb8bbf161179ae86ba91d4a80f873fae21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821524  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key ...
	I1019 12:53:20.821544  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key: {Name:mk22ac123e8932e8db98bd277997b637ec873079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821743  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:53:20.821779  672737 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:53:20.821789  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:53:20.821812  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:53:20.821834  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:53:20.821860  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:53:20.821901  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:20.822529  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:53:20.843244  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:53:20.860464  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:53:20.877640  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:53:20.895480  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:53:20.912797  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:53:20.929757  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:53:20.947521  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:53:20.964869  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:53:20.984248  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:53:21.003061  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:53:21.020532  672737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:53:21.033435  672737 ssh_runner.go:195] Run: openssl version
	I1019 12:53:21.040056  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:53:21.049001  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052716  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052781  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.088149  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:53:21.097154  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:53:21.105495  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109154  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109216  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.144296  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:53:21.153347  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:53:21.161940  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165605  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165655  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.199345  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
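	The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: a CA under /etc/ssl/certs is only found during chain verification if it is reachable as <subject-hash>.0. Condensed to one cert (values match this run, where minikubeCA hashed to b5213941):
	
	# link a CA under its subject hash, as the commands above do for each cert
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"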
	I1019 12:53:21.208215  672737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:53:21.212056  672737 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:53:21.212119  672737 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:21.212215  672737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:53:21.212265  672737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:53:21.240234  672737 cri.go:89] found id: ""
	I1019 12:53:21.240301  672737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:53:21.248582  672737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:53:21.256728  672737 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:53:21.256801  672737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:53:21.265096  672737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:53:21.265135  672737 kubeadm.go:157] found existing configuration files:
	
	I1019 12:53:21.265192  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:53:21.273544  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:53:21.273612  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:53:21.282090  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:53:21.290396  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:53:21.290490  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:53:21.300201  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.308252  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:53:21.308306  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.315749  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:53:21.323167  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:53:21.323239  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
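	Note: the four grep/rm pairs above are the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not point at the expected control-plane endpoint is removed before kubeadm init. A condensed sketch of the same loop, with the endpoint taken from the log:
	
	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" 2>/dev/null \
	      || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done
	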
	I1019 12:53:21.330315  672737 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:53:21.369107  672737 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:53:21.369180  672737 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:53:21.390319  672737 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:53:21.390379  672737 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:53:21.390409  672737 kubeadm.go:318] OS: Linux
	I1019 12:53:21.390480  672737 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:53:21.390540  672737 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:53:21.390652  672737 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:53:21.390735  672737 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:53:21.390790  672737 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:53:21.390890  672737 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:53:21.390973  672737 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:53:21.391026  672737 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:53:21.449690  672737 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:53:21.449859  672737 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:53:21.449988  672737 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:53:21.458017  672737 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:53:21.459979  672737 out.go:252]   - Generating certificates and keys ...
	I1019 12:53:21.460084  672737 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:53:21.460184  672737 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1019 12:53:17.646821  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.647689  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.795394  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:21.795584  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:23.796166  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:21.782609  672737 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:53:22.004817  672737 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:53:22.154911  672737 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:53:22.730145  672737 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:53:22.932723  672737 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:53:22.932904  672737 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.243959  672737 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:53:23.244120  672737 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.410854  672737 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:53:23.472366  672737 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:53:23.643869  672737 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:53:23.644033  672737 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:53:23.711987  672737 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:53:24.037993  672737 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:53:24.501726  672737 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:53:24.744523  672737 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:53:24.859147  672737 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:53:24.859688  672737 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:53:24.863264  672737 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:53:24.864642  672737 out.go:252]   - Booting up control plane ...
	I1019 12:53:24.864730  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:53:24.864796  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:53:24.865498  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:53:24.879079  672737 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:53:24.879207  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:53:24.886821  672737 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:53:24.887101  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:53:24.887199  672737 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:53:24.983491  672737 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:53:24.983708  672737 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:53:25.984614  672737 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001307224s
	I1019 12:53:25.988599  672737 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:53:25.988724  672737 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:53:25.988848  672737 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:53:25.988960  672737 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
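	Note: the three control-plane-check probes above can be reproduced by hand from inside the node; the components serve self-signed certs, hence -k (addresses and ports from the log):
	
	  curl -k https://192.168.94.2:8443/livez     # kube-apiserver
	  curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	  curl -k https://127.0.0.1:10259/livez       # kube-scheduler
	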
	W1019 12:53:22.146944  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:24.647501  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:26.295683  663517 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:53:26.295713  663517 pod_ready.go:86] duration metric: took 31.505627238s for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.297917  663517 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.301953  663517 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:53:26.301978  663517 pod_ready.go:86] duration metric: took 4.035262ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.304112  663517 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.308120  663517 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:53:26.308144  663517 pod_ready.go:86] duration metric: took 4.009533ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.309999  663517 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.494192  663517 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:53:26.494219  663517 pod_ready.go:86] duration metric: took 184.199033ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.694487  663517 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.094397  663517 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:53:27.094457  663517 pod_ready.go:86] duration metric: took 399.93585ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.293675  663517 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694119  663517 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:53:27.694146  663517 pod_ready.go:86] duration metric: took 400.447048ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694158  663517 pod_ready.go:40] duration metric: took 32.912525222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:27.746279  663517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:27.748237  663517 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	I1019 12:53:27.518915  672737 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.530228054s
	I1019 12:53:28.053793  672737 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.061152071s
	I1019 12:53:29.990081  672737 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001429284s
	I1019 12:53:30.001867  672737 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:53:30.014037  672737 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:53:30.024140  672737 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:53:30.024456  672737 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-190708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:53:30.033264  672737 kubeadm.go:318] [bootstrap-token] Using token: gtkds1.9e0h8pmw5r5mqwja
	I1019 12:53:30.034587  672737 out.go:252]   - Configuring RBAC rules ...
	I1019 12:53:30.034754  672737 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:53:30.038773  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:53:30.045039  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:53:30.049009  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:53:30.052044  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:53:30.054665  672737 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:53:30.397490  672737 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:53:30.827821  672737 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:53:31.396481  672737 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:53:31.397310  672737 kubeadm.go:318] 
	I1019 12:53:31.397402  672737 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:53:31.397413  672737 kubeadm.go:318] 
	I1019 12:53:31.397551  672737 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:53:31.397565  672737 kubeadm.go:318] 
	I1019 12:53:31.397596  672737 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:53:31.397650  672737 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:53:31.397698  672737 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:53:31.397705  672737 kubeadm.go:318] 
	I1019 12:53:31.397749  672737 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:53:31.397755  672737 kubeadm.go:318] 
	I1019 12:53:31.397794  672737 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:53:31.397800  672737 kubeadm.go:318] 
	I1019 12:53:31.397861  672737 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:53:31.397953  672737 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:53:31.398040  672737 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:53:31.398051  672737 kubeadm.go:318] 
	I1019 12:53:31.398140  672737 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:53:31.398207  672737 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:53:31.398213  672737 kubeadm.go:318] 
	I1019 12:53:31.398292  672737 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398378  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:53:31.398399  672737 kubeadm.go:318] 	--control-plane 
	I1019 12:53:31.398405  672737 kubeadm.go:318] 
	I1019 12:53:31.398523  672737 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:53:31.398534  672737 kubeadm.go:318] 
	I1019 12:53:31.398627  672737 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398790  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:53:31.401824  672737 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:53:31.402002  672737 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
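	Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the node with the standard openssl pipeline from the kubeadm documentation; the cert path below assumes the certificateDir reported earlier in this log:
	
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	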
	I1019 12:53:31.402023  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:31.402032  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:31.403960  672737 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:53:31.405314  672737 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:53:31.410474  672737 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:53:31.410496  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:53:31.424273  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1019 12:53:27.147074  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:29.645647  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:31.646857  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:31.641912  672737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:53:31.642008  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:31.642011  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-190708 minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-190708 minikube.k8s.io/primary=true
	I1019 12:53:31.652529  672737 ops.go:34] apiserver oom_adj: -16
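	Note: "apiserver oom_adj: -16" above is read straight from procfs; on the legacy -17..15 oom_adj scale, -16 makes kube-apiserver one of the last processes the kernel OOM killer will choose. The manual equivalent of the check:
	
	  cat /proc/$(pgrep kube-apiserver)/oom_adj   # -> -16
	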
	I1019 12:53:31.718996  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.219629  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.719834  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.219813  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.719692  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.219076  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.719433  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.219917  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.719034  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.785029  672737 kubeadm.go:1113] duration metric: took 4.143080971s to wait for elevateKubeSystemPrivileges
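	Note: the repeated "kubectl get sa default" runs above (roughly every half second) are a readiness poll: the minikube-rbac clusterrolebinding created earlier targets kube-system's default ServiceAccount, so minikube waits until the token controller has created it. An equivalent wait loop, with paths from the log:
	
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done
	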
	I1019 12:53:35.785068  672737 kubeadm.go:402] duration metric: took 14.57295181s to StartCluster
	I1019 12:53:35.785101  672737 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.785174  672737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:35.787497  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.787794  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:53:35.787820  672737 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:35.787897  672737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:53:35.787993  672737 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:53:35.788017  672737 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	I1019 12:53:35.788020  672737 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	I1019 12:53:35.788053  672737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:53:35.788062  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:35.788057  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.788500  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.788555  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.789512  672737 out.go:179] * Verifying Kubernetes components...
	I1019 12:53:35.791378  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:35.812380  672737 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:53:33.646988  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:34.648076  664256 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:53:34.648104  664256 pod_ready.go:86] duration metric: took 36.507165259s for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.650741  664256 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.654523  664256 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.654547  664256 pod_ready.go:86] duration metric: took 3.785206ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.656429  664256 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.660685  664256 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.660712  664256 pod_ready.go:86] duration metric: took 4.258461ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.662348  664256 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.844857  664256 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.844886  664256 pod_ready.go:86] duration metric: took 182.521582ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.044783  664256 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.445005  664256 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:53:35.445031  664256 pod_ready.go:86] duration metric: took 400.222332ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.645060  664256 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045246  664256 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:36.045282  664256 pod_ready.go:86] duration metric: took 400.190569ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045298  664256 pod_ready.go:40] duration metric: took 37.908676389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:36.105764  664256 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.108299  664256 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
	I1019 12:53:35.813186  672737 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	I1019 12:53:35.813237  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.813735  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.815209  672737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.815225  672737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:53:35.815282  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.843451  672737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:35.843479  672737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:53:35.843567  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.844218  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.868726  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.877614  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:53:35.929249  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:35.955142  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.988275  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:36.052147  672737 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
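	Note: the sed pipeline a few lines above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway. The inserted fragment looks like this (address from the log; the rest of the Corefile is untouched):
	
	  hosts {
	     192.168.94.1 host.minikube.internal
	     fallthrough
	  }
	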
	I1019 12:53:36.053790  672737 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:53:36.053847  672737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:53:36.305744  672737 api_server.go:72] duration metric: took 517.881771ms to wait for apiserver process to appear ...
	I1019 12:53:36.305769  672737 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:53:36.305790  672737 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:53:36.310834  672737 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:53:36.311767  672737 api_server.go:141] control plane version: v1.34.1
	I1019 12:53:36.311798  672737 api_server.go:131] duration metric: took 6.020737ms to wait for apiserver health ...
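	Note: the healthz wait above is a plain HTTPS GET against the apiserver; it can be repeated manually (self-signed serving cert, hence -k):
	
	  curl -k https://192.168.94.2:8443/healthz   # -> ok
	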
	I1019 12:53:36.311809  672737 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:53:36.313872  672737 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:53:36.314880  672737 system_pods.go:59] 8 kube-system pods found
	I1019 12:53:36.314917  672737 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314933  672737 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:53:36.314945  672737 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:53:36.314955  672737 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:53:36.314961  672737 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:53:36.314969  672737 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:53:36.314976  672737 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:53:36.314981  672737 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314992  672737 system_pods.go:74] duration metric: took 3.173905ms to wait for pod list to return data ...
	I1019 12:53:36.315000  672737 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:53:36.315055  672737 addons.go:514] duration metric: took 527.155312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:53:36.317196  672737 default_sa.go:45] found service account: "default"
	I1019 12:53:36.317218  672737 default_sa.go:55] duration metric: took 2.212206ms for default service account to be created ...
	I1019 12:53:36.317230  672737 kubeadm.go:586] duration metric: took 529.375092ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:36.317251  672737 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:53:36.319523  672737 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:53:36.319545  672737 node_conditions.go:123] node cpu capacity is 8
	I1019 12:53:36.319557  672737 node_conditions.go:105] duration metric: took 2.300039ms to run NodePressure ...
	I1019 12:53:36.319567  672737 start.go:241] waiting for startup goroutines ...
	I1019 12:53:36.557265  672737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-190708" context rescaled to 1 replicas
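	Note: kubeadm deploys CoreDNS with two replicas by default; on a single-node cluster minikube trims the deployment to one, as logged above. A sketch of the equivalent kubectl call:
	
	  kubectl -n kube-system scale deployment coredns --replicas=1
	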
	I1019 12:53:36.557311  672737 start.go:246] waiting for cluster config update ...
	I1019 12:53:36.557328  672737 start.go:255] writing updated cluster config ...
	I1019 12:53:36.557703  672737 ssh_runner.go:195] Run: rm -f paused
	I1019 12:53:36.609706  672737 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.612691  672737 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.690355084Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.694059835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:53:04 embed-certs-123864 crio[561]: time="2025-10-19T12:53:04.694086905Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.814076Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed00c565-41b5-4fa0-a40d-b2326db60601 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.816757129Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3fed050f-5e63-41ab-baca-97b6e152924c name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.820112474Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=b960e60f-d4d8-4b5e-8891-978a145e024a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.822130907Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.828263773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.82894693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.858056235Z" level=info msg="Created container a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=b960e60f-d4d8-4b5e-8891-978a145e024a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.858674367Z" level=info msg="Starting container: a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4" id=a3967556-14a5-4829-8a8a-faa4b362c425 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.860409007Z" level=info msg="Started container" PID=1753 containerID=a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper id=a3967556-14a5-4829-8a8a-faa4b362c425 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a883e16d7bc14275a4e818b7858fcf8387529de2ded29cde73a09745bbfb6a65
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.927498478Z" level=info msg="Removing container: 673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2" id=ebcc5abd-cc59-493d-adc1-55fc8d55f317 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:18 embed-certs-123864 crio[561]: time="2025-10-19T12:53:18.938078974Z" level=info msg="Removed container 673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j/dashboard-metrics-scraper" id=ebcc5abd-cc59-493d-adc1-55fc8d55f317 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.944619Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5ead4397-cfc6-49c2-b9fb-45a0b5a3ce9d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.945591245Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=90197312-4ced-45f8-9103-8fc89f74933d name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.946675202Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6843691f-9a3e-4199-b141-7ba4952c861f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.946958387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.951960602Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952159678Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9d6dcbd12f650ab6877b6e7b6a1ed7d676e45a24840460cdb67f22d9de3d27f1/merged/etc/passwd: no such file or directory"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952192335Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9d6dcbd12f650ab6877b6e7b6a1ed7d676e45a24840460cdb67f22d9de3d27f1/merged/etc/group: no such file or directory"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.952501088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.976245526Z" level=info msg="Created container 120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394: kube-system/storage-provisioner/storage-provisioner" id=6843691f-9a3e-4199-b141-7ba4952c861f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.976831542Z" level=info msg="Starting container: 120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394" id=fe480c42-5ff0-4b35-8925-105f3d6b38f8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:24 embed-certs-123864 crio[561]: time="2025-10-19T12:53:24.978491427Z" level=info msg="Started container" PID=1767 containerID=120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394 description=kube-system/storage-provisioner/storage-provisioner id=fe480c42-5ff0-4b35-8925-105f3d6b38f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=061ffbf2eae7a5bff5d5bf2d77fbbb1b2373fe2a401b5c5aa14af44f68af45d7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	120f5bcceb6a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   061ffbf2eae7a       storage-provisioner                          kube-system
	a632aa823b9fc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   a883e16d7bc14       dashboard-metrics-scraper-6ffb444bf9-64x9j   kubernetes-dashboard
	60dc588bc47f0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   dda631042a0ac       kubernetes-dashboard-855c9754f9-b55t5        kubernetes-dashboard
	5d92a5a60399f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   10c470c3a1cf7       coredns-66bc5c9577-bw9l4                     kube-system
	f8a571de676c8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   bbc62fa754a1c       busybox                                      default
	0bc1ee77f0b5e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   0b0c2994533ca       kube-proxy-gvrcz                             kube-system
	b5ad804329727       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   402b70b14c518       kindnet-zkvs7                                kube-system
	6db88a089aeb9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   061ffbf2eae7a       storage-provisioner                          kube-system
	0d6bd37e74ce4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   a3c89edce9516       kube-controller-manager-embed-certs-123864   kube-system
	2948778c0277b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   a1e637d500143       kube-scheduler-embed-certs-123864            kube-system
	f0fd8fcb3c6d8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   51601a81a56ad       etcd-embed-certs-123864                      kube-system
	ce30ef8a95f35       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   a16ad5c566f92       kube-apiserver-embed-certs-123864            kube-system
	
	
	==> coredns [5d92a5a60399ff61af8aa305455b29363b439912ce116e9b8a33058d2d2f8903] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44816 - 46985 "HINFO IN 3190984299037100603.72535971130669998. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.094747334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
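	Note: 10.96.0.1 in the timeouts above is the in-cluster "kubernetes" Service VIP, the first address of the ServiceCIDR 10.96.0.0/12 from the cluster config earlier in this log; these dial errors typically clear once kube-proxy has programmed the Service rules. To confirm the VIP:
	
	  kubectl get svc kubernetes -n default
	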
	
	
	==> describe nodes <==
	Name:               embed-certs-123864
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-123864
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=embed-certs-123864
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_51_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-123864
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:51:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:53:23 +0000   Sun, 19 Oct 2025 12:52:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-123864
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                487d540e-33e7-428f-8d26-3b1ead032aff
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-bw9l4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m9s
	  kube-system                 etcd-embed-certs-123864                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m17s
	  kube-system                 kindnet-zkvs7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-embed-certs-123864             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-embed-certs-123864    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-gvrcz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-embed-certs-123864             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-64x9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b55t5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m9s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s              kubelet          Node embed-certs-123864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s              kubelet          Node embed-certs-123864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s              kubelet          Node embed-certs-123864 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m11s              node-controller  Node embed-certs-123864 event: Registered Node embed-certs-123864 in Controller
	  Normal  NodeReady                89s                kubelet          Node embed-certs-123864 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-123864 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-123864 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-123864 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-123864 event: Registered Node embed-certs-123864 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [f0fd8fcb3c6d87abb5a73bdbe32675387cdf9b39fb23cc80e3f9fcee156b57fc] <==
	{"level":"warn","ts":"2025-10-19T12:52:52.252538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.259392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.266268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.276248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.284258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.291325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.298701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.304655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.312324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.320209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.341575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.348758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.364936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.373163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.381124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.388551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.395392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.403789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.412033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.433642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.437418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.444648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.452720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:52.523651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46686","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T12:53:02.968539Z","caller":"traceutil/trace.go:172","msg":"trace[506846127] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"113.296073ms","start":"2025-10-19T12:53:02.855219Z","end":"2025-10-19T12:53:02.968515Z","steps":["trace[506846127] 'process raft request'  (duration: 113.068692ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:53:44 up  2:36,  0 user,  load average: 3.25, 4.46, 3.06
	Linux embed-certs-123864 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b5ad804329727e632f091f904fd14b6edbd537247928aea461b7f33073a5f96e] <==
	I1019 12:52:54.383272       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:54.383522       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1019 12:52:54.383702       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:54.383726       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:54.383740       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:54.623313       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:54.623346       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:54.623369       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:54.623508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:54.881580       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:54.881698       1 metrics.go:72] Registering metrics
	I1019 12:52:54.881817       1 controller.go:711] "Syncing nftables rules"
	I1019 12:53:04.623597       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:04.623686       1 main.go:301] handling current node
	I1019 12:53:14.625556       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:14.625610       1 main.go:301] handling current node
	I1019 12:53:24.624362       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:24.624394       1 main.go:301] handling current node
	I1019 12:53:34.630146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:34.630188       1 main.go:301] handling current node
	I1019 12:53:44.632196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1019 12:53:44.632234       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce30ef8a95f35deb3f080b7ea813df6a93693594ac7959d6e3a0b79159f36e25] <==
	I1019 12:52:53.115608       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:52:53.118521       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 12:52:53.118632       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:52:53.119198       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:52:53.119261       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1019 12:52:53.119308       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:53.119339       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:53.119364       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:53.119387       1 cache.go:39] Caches are synced for autoregister controller
	E1019 12:52:53.124378       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:53.130653       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:53.142042       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:53.142086       1 policy_source.go:240] refreshing policies
	I1019 12:52:53.181607       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:53.454327       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:53.486325       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:53.509765       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:53.520277       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:53.526954       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:53.563578       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.81.127"}
	I1019 12:52:53.576736       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.143.102"}
	I1019 12:52:54.014803       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:52:56.386191       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:52:56.536025       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:52:56.635838       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0d6bd37e74ce4fd54de1cf8e27fcb93f0da4eae636f80ecf509c242bba0ab6b4] <==
	I1019 12:52:56.031925       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:52:56.032030       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:52:56.032111       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-123864"
	I1019 12:52:56.032196       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:56.032252       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:52:56.032319       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1019 12:52:56.033462       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 12:52:56.036401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:52:56.038047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:52:56.038143       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:52:56.038155       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:52:56.038156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:56.040280       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:56.040368       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:56.046063       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:52:56.050271       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1019 12:52:56.053541       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:52:56.056807       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 12:52:56.059102       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:52:56.060315       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:56.060315       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:56.061546       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:52:56.064831       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:56.069100       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1019 12:52:56.074380       1 shared_informer.go:356] "Caches are synced" controller="expand"
	
	
	==> kube-proxy [0bc1ee77f0b5e034f70aae53c104ca5c85bb5db4d83c9b4db7e7ac9e13cfffb0] <==
	I1019 12:52:54.239483       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:54.297557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:54.398667       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:54.398773       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1019 12:52:54.398934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:54.424161       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:54.424281       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:54.430882       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:54.431310       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:54.431613       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:54.434043       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:54.434391       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:54.434112       1 config.go:200] "Starting service config controller"
	I1019 12:52:54.434478       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:54.434716       1 config.go:309] "Starting node config controller"
	I1019 12:52:54.435089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:54.435142       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:54.434127       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:54.435205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:54.535498       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:54.535528       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:52:54.535558       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2948778c0277b5d716b5581d32565f17755bd979469128c13d911b54b47927ea] <==
	I1019 12:52:52.324696       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:52:53.369816       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:53.369932       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:53.375746       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:52:53.375982       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:52:53.376084       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:53.376808       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:53.376960       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:53.376117       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.377090       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.376805       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:53.476717       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:52:53.477323       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:53.477923       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599397     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xks8b\" (UniqueName: \"kubernetes.io/projected/479ad879-6024-41ed-a32e-fa719e095f1c-kube-api-access-xks8b\") pod \"dashboard-metrics-scraper-6ffb444bf9-64x9j\" (UID: \"479ad879-6024-41ed-a32e-fa719e095f1c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j"
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599468     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2677e6ff-bf6f-4e47-acea-acc1cfbc5c26-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b55t5\" (UID: \"2677e6ff-bf6f-4e47-acea-acc1cfbc5c26\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5"
	Oct 19 12:52:56 embed-certs-123864 kubelet[725]: I1019 12:52:56.599498     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkgn\" (UniqueName: \"kubernetes.io/projected/2677e6ff-bf6f-4e47-acea-acc1cfbc5c26-kube-api-access-2lkgn\") pod \"kubernetes-dashboard-855c9754f9-b55t5\" (UID: \"2677e6ff-bf6f-4e47-acea-acc1cfbc5c26\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5"
	Oct 19 12:52:59 embed-certs-123864 kubelet[725]: I1019 12:52:59.868184     725 scope.go:117] "RemoveContainer" containerID="815d9c8c3ea768b62ddedeafc571e1b36a943e738d5576edefa90dbdbf346d74"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: I1019 12:53:00.872974     725 scope.go:117] "RemoveContainer" containerID="815d9c8c3ea768b62ddedeafc571e1b36a943e738d5576edefa90dbdbf346d74"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: I1019 12:53:00.873316     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:00 embed-certs-123864 kubelet[725]: E1019 12:53:00.873569     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:01 embed-certs-123864 kubelet[725]: I1019 12:53:01.880133     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:01 embed-certs-123864 kubelet[725]: E1019 12:53:01.880384     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:03 embed-certs-123864 kubelet[725]: I1019 12:53:03.897416     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b55t5" podStartSLOduration=1.465436805 podStartE2EDuration="7.89739309s" podCreationTimestamp="2025-10-19 12:52:56 +0000 UTC" firstStartedPulling="2025-10-19 12:52:56.789655719 +0000 UTC m=+6.072370541" lastFinishedPulling="2025-10-19 12:53:03.221612016 +0000 UTC m=+12.504326826" observedRunningTime="2025-10-19 12:53:03.897143283 +0000 UTC m=+13.179858111" watchObservedRunningTime="2025-10-19 12:53:03.89739309 +0000 UTC m=+13.180107919"
	Oct 19 12:53:06 embed-certs-123864 kubelet[725]: I1019 12:53:06.620686     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:06 embed-certs-123864 kubelet[725]: E1019 12:53:06.620913     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.813548     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.926138     725 scope.go:117] "RemoveContainer" containerID="673befa8ab194377be8caa017e667243fd35cbc784b9365698cbda6d6070dba2"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: I1019 12:53:18.926476     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:18 embed-certs-123864 kubelet[725]: E1019 12:53:18.926687     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:24 embed-certs-123864 kubelet[725]: I1019 12:53:24.944213     725 scope.go:117] "RemoveContainer" containerID="6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c"
	Oct 19 12:53:26 embed-certs-123864 kubelet[725]: I1019 12:53:26.620949     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:26 embed-certs-123864 kubelet[725]: E1019 12:53:26.621163     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:38 embed-certs-123864 kubelet[725]: I1019 12:53:38.813659     725 scope.go:117] "RemoveContainer" containerID="a632aa823b9fc8984bb7482d901a2349151082b67f3599127790b28af1d4fee4"
	Oct 19 12:53:38 embed-certs-123864 kubelet[725]: E1019 12:53:38.813869     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-64x9j_kubernetes-dashboard(479ad879-6024-41ed-a32e-fa719e095f1c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-64x9j" podUID="479ad879-6024-41ed-a32e-fa719e095f1c"
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:39 embed-certs-123864 systemd[1]: kubelet.service: Consumed 1.605s CPU time.
	
	
	==> kubernetes-dashboard [60dc588bc47f0889522b49eb992e43c19d34cefe4a48f5c81a8b0e95a7f16696] <==
	2025/10/19 12:53:03 Starting overwatch
	2025/10/19 12:53:03 Using namespace: kubernetes-dashboard
	2025/10/19 12:53:03 Using in-cluster config to connect to apiserver
	2025/10/19 12:53:03 Using secret token for csrf signing
	2025/10/19 12:53:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:53:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:53:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:53:03 Generating JWE encryption key
	2025/10/19 12:53:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:53:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:53:03 Initializing JWE encryption key from synchronized object
	2025/10/19 12:53:03 Creating in-cluster Sidecar client
	2025/10/19 12:53:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:53:03 Serving insecurely on HTTP port: 9090
	2025/10/19 12:53:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [120f5bcceb6a3b5688f01e27d335bead98c322d2007e7d8ca8429a1a4fd15394] <==
	I1019 12:53:24.992115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:53:25.000318       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:53:25.000365       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:53:25.002852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:28.458458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:32.719295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:36.318242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:39.372089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.394266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.399512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:42.399667       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:42.399860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321!
	I1019 12:53:42.399857       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"45d62354-4f4f-445a-9d0d-795d15878b3f", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321 became leader
	W1019 12:53:42.401691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.405205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:42.500006       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-123864_0bd36993-e3ed-4277-b534-0e3c4a722321!
	W1019 12:53:44.408220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:44.412530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [6db88a089aeb9f19d418320370a192296cab04bf8fa4ea3cf27af48515e8871c] <==
	I1019 12:52:54.210661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:53:24.215945       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
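Reading the post-mortem logs above: one storage-provisioner instance (6db88a...) exited fatally at 12:53:24 after an i/o timeout reaching the apiserver service IP (10.96.0.1:443) and was replaced by a second instance (120f5b...) that acquired the leader lease at 12:53:42; dashboard-metrics-scraper is cycling through CrashLoopBackOff with a growing back-off (10s, then 20s); and systemd stopped the kubelet at 12:53:39, seconds before the status probe below. The "Allocated resources" arithmetic in the node description also checks out: 100m + 100m + 100m + 250m + 200m + 100m = 850m of the node's 8000m, i.e. about 10.6%, reported as 10%. A minimal sketch for inspecting container state directly on a CRI-O node, reusing commands the harness itself runs in this report (the container ID is a placeholder):

	# List kube-system containers through the CRI, as minikube's pause helper does:
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	# Tail one container's logs to confirm the crash reason:
	sudo crictl logs <container-id>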
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-123864 -n embed-certs-123864
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-123864 -n embed-certs-123864: exit status 2 (321.432079ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-123864 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.89s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-999693 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-999693 --alsologtostderr -v=1: exit status 80 (1.630317483s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-999693 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:53:47.877915  679099 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:47.878221  679099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:47.878239  679099 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:47.878246  679099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:47.878516  679099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:47.878758  679099 out.go:368] Setting JSON to false
	I1019 12:53:47.878799  679099 mustload.go:65] Loading cluster: default-k8s-diff-port-999693
	I1019 12:53:47.879138  679099 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:47.879575  679099 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-999693 --format={{.State.Status}}
	I1019 12:53:47.897158  679099 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:53:47.897536  679099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:47.956096  679099 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2025-10-19 12:53:47.946032022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:47.956711  679099 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-999693 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 12:53:47.958611  679099 out.go:179] * Pausing node default-k8s-diff-port-999693 ... 
	I1019 12:53:47.959683  679099 host.go:66] Checking if "default-k8s-diff-port-999693" exists ...
	I1019 12:53:47.959970  679099 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:47.960016  679099 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-999693
	I1019 12:53:47.978152  679099 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/default-k8s-diff-port-999693/id_rsa Username:docker}
	I1019 12:53:48.073673  679099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:48.095132  679099 pause.go:52] kubelet running: true
	I1019 12:53:48.095188  679099 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:48.269140  679099 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:48.269256  679099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:48.336204  679099 cri.go:89] found id: "3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f"
	I1019 12:53:48.336227  679099 cri.go:89] found id: "78d2ca731e98befb02938c95d004c6de4e1bb290061976cb23bcd09a6b0139e5"
	I1019 12:53:48.336231  679099 cri.go:89] found id: "81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375"
	I1019 12:53:48.336234  679099 cri.go:89] found id: "1a511f79ffb7681fd929b4894c4f59a2a44ed69f557e9e40d7d67bdedd66fb6d"
	I1019 12:53:48.336236  679099 cri.go:89] found id: "dd65c0ffcffffaa62043de3c54111cd1ddf6293c650cbd534ce5438d3ee3e784"
	I1019 12:53:48.336239  679099 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:53:48.336242  679099 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:53:48.336244  679099 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:53:48.336246  679099 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:53:48.336253  679099 cri.go:89] found id: "fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	I1019 12:53:48.336256  679099 cri.go:89] found id: "1cd8bcfb5c309260593239de52b34e22550c164bb9abd93b219cb9e1a5bf0fbe"
	I1019 12:53:48.336260  679099 cri.go:89] found id: ""
	I1019 12:53:48.336305  679099 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:48.347827  679099 retry.go:31] will retry after 146.471128ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:48Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:48.495251  679099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:48.508562  679099 pause.go:52] kubelet running: false
	I1019 12:53:48.508624  679099 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:48.640268  679099 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:48.640356  679099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:48.706500  679099 cri.go:89] found id: "3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f"
	I1019 12:53:48.706522  679099 cri.go:89] found id: "78d2ca731e98befb02938c95d004c6de4e1bb290061976cb23bcd09a6b0139e5"
	I1019 12:53:48.706527  679099 cri.go:89] found id: "81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375"
	I1019 12:53:48.706530  679099 cri.go:89] found id: "1a511f79ffb7681fd929b4894c4f59a2a44ed69f557e9e40d7d67bdedd66fb6d"
	I1019 12:53:48.706533  679099 cri.go:89] found id: "dd65c0ffcffffaa62043de3c54111cd1ddf6293c650cbd534ce5438d3ee3e784"
	I1019 12:53:48.706535  679099 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:53:48.706538  679099 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:53:48.706540  679099 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:53:48.706543  679099 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:53:48.706552  679099 cri.go:89] found id: "fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	I1019 12:53:48.706556  679099 cri.go:89] found id: "1cd8bcfb5c309260593239de52b34e22550c164bb9abd93b219cb9e1a5bf0fbe"
	I1019 12:53:48.706558  679099 cri.go:89] found id: ""
	I1019 12:53:48.706596  679099 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:48.718340  679099 retry.go:31] will retry after 504.009645ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:48Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:53:49.222623  679099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:53:49.235563  679099 pause.go:52] kubelet running: false
	I1019 12:53:49.235615  679099 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:53:49.372051  679099 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:53:49.372141  679099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:53:49.437665  679099 cri.go:89] found id: "3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f"
	I1019 12:53:49.437691  679099 cri.go:89] found id: "78d2ca731e98befb02938c95d004c6de4e1bb290061976cb23bcd09a6b0139e5"
	I1019 12:53:49.437706  679099 cri.go:89] found id: "81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375"
	I1019 12:53:49.437710  679099 cri.go:89] found id: "1a511f79ffb7681fd929b4894c4f59a2a44ed69f557e9e40d7d67bdedd66fb6d"
	I1019 12:53:49.437712  679099 cri.go:89] found id: "dd65c0ffcffffaa62043de3c54111cd1ddf6293c650cbd534ce5438d3ee3e784"
	I1019 12:53:49.437715  679099 cri.go:89] found id: "7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4"
	I1019 12:53:49.437718  679099 cri.go:89] found id: "dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74"
	I1019 12:53:49.437720  679099 cri.go:89] found id: "386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a"
	I1019 12:53:49.437722  679099 cri.go:89] found id: "3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0"
	I1019 12:53:49.437728  679099 cri.go:89] found id: "fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	I1019 12:53:49.437730  679099 cri.go:89] found id: "1cd8bcfb5c309260593239de52b34e22550c164bb9abd93b219cb9e1a5bf0fbe"
	I1019 12:53:49.437733  679099 cri.go:89] found id: ""
	I1019 12:53:49.437774  679099 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:53:49.451238  679099 out.go:203] 
	W1019 12:53:49.452415  679099 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:53:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:53:49.452445  679099 out.go:285] * 
	* 
	W1019 12:53:49.456923  679099 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:53:49.458037  679099 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-999693 --alsologtostderr -v=1 failed: exit status 80
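The trace above shows the pause flow: confirm the kubelet is active, run systemctl disable --now kubelet, enumerate CRI containers with crictl (the same eleven IDs are found on each pass), then run sudo runc list -f json to get the runtime state of what to pause. That last step is the one that fails: runc's state root (/run/runc, per the error string) does not exist on the node even though CRI-O still reports the containers, and after two retries minikube gives up with GUEST_PAUSE / exit status 80. Note that /run is a tmpfs inside the kic container (see "Tmpfs": {"/run": "", "/tmp": ""} in the docker inspect output below), so that directory only exists once runc itself has written state there. A hedged reproduction sketch against this profile, using flags already exercised elsewhere in this report (both commands are illustrative, not taken from the log):

	# Hypothetical check inside the node:
	out/minikube-linux-amd64 -p default-k8s-diff-port-999693 ssh "sudo runc list -f json"   # expected to fail: open /run/runc: no such file or directory
	out/minikube-linux-amd64 -p default-k8s-diff-port-999693 ssh "sudo crictl ps -a"        # still lists the running containers via CRI-O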
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-999693
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-999693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	        "Created": "2025-10-19T12:51:45.922696096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:47.156686363Z",
	            "FinishedAt": "2025-10-19T12:52:46.282964524Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hosts",
	        "LogPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0-json.log",
	        "Name": "/default-k8s-diff-port-999693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-999693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-999693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	                "LowerDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-999693",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-999693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-999693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc2aaa49b456f7607ac4e4ba8ddbb8b60c8574c90462a4f4262df0f28545c55b",
	            "SandboxKey": "/var/run/docker/netns/fc2aaa49b456",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-999693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ae:10:af:56:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de90530a289272ed110d9eb21157ec5037120fb6575a550c928b9dda03629c85",
	                    "EndpointID": "d8036758ee0cd0ce979b22fbbfdf2bfe27bdd4d51a0d6be413cbaa73cc1b06fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-999693",
	                        "1ece3120c0d2"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
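Two details in the dump above frame the status check that follows: "State" still shows "Status": "running" with "Paused": false (a pause at the docker layer would read "paused"), and the empty "HostPort" values under "PortBindings" are requests for ephemeral ports that dockerd resolves to the concrete 33495-33499 bindings under "NetworkSettings.Ports". A minimal Go sketch, not part of the test harness, that pulls just those fields with the same inspect-template syntax the cli_runner lines later in this log use:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One -f template instead of a full docker inspect dump: container
	// state, pause flag, and the ephemeral host port published for 22/tcp.
	tmpl := `{{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-999693").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("%s", out) // for the dump above: "running paused=false ssh=33495"
}

Because minikube pause freezes the kubelet and workload inside the kic node rather than the node container itself, a docker-level "running" here is expected either way, which is why the harness below notes that the status exit code "may be ok".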
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693: exit status 2 (305.40704ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25
E1019 12:53:50.664254  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kindnet-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25: (1.051463435s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ default-k8s-diff-port-999693 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p default-k8s-diff-port-999693 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:11.615299  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615311  672737 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:11.615315  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615551  672737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:11.616038  672737 out.go:368] Setting JSON to false
	I1019 12:53:11.617746  672737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9340,"bootTime":1760869052,"procs":566,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:11.617846  672737 start.go:141] virtualization: kvm guest
	I1019 12:53:11.619915  672737 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:11.621699  672737 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:11.621736  672737 notify.go:220] Checking for updates...
	I1019 12:53:11.624129  672737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:11.626246  672737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:11.627453  672737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:11.628681  672737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:11.629995  672737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:11.631642  672737 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631786  672737 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631990  672737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:11.658136  672737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:11.658233  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.722933  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.711540262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.723046  672737 docker.go:318] overlay module found
	I1019 12:53:11.724874  672737 out.go:179] * Using the docker driver based on user configuration
	I1019 12:53:11.726372  672737 start.go:305] selected driver: docker
	I1019 12:53:11.726394  672737 start.go:925] validating driver "docker" against <nil>
	I1019 12:53:11.726412  672737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:11.727020  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.787909  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.778156597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.788107  672737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 12:53:11.788149  672737 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 12:53:11.788529  672737 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:11.790331  672737 out.go:179] * Using Docker driver with root privileges
	I1019 12:53:11.791430  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:11.791511  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:11.791528  672737 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:53:11.791587  672737 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:11.792873  672737 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:11.794127  672737 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:11.795216  672737 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:11.796409  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:11.796465  672737 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:11.796477  672737 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:11.796486  672737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:11.796551  672737 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:11.796562  672737 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:11.796649  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:11.796666  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json: {Name:mk458b42b0f9f21f6e5af311f76e8caf9c4c5efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:11.816881  672737 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:11.816898  672737 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:11.816920  672737 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:11.816943  672737 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:11.817032  672737 start.go:364] duration metric: took 74.015µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:11.817054  672737 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:11.817117  672737 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:53:09.146473  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:11.146837  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:10.296323  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:12.795707  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:11.818963  672737 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:53:11.819197  672737 start.go:159] libmachine.API.Create for "newest-cni-190708" (driver="docker")
	I1019 12:53:11.819227  672737 client.go:168] LocalClient.Create starting
	I1019 12:53:11.819287  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:53:11.819320  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819338  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819384  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:53:11.819402  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819412  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819803  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:53:11.837346  672737 cli_runner.go:211] docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:53:11.837404  672737 network_create.go:284] running [docker network inspect newest-cni-190708] to gather additional debugging logs...
	I1019 12:53:11.837466  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708
	W1019 12:53:11.853768  672737 cli_runner.go:211] docker network inspect newest-cni-190708 returned with exit code 1
	I1019 12:53:11.853794  672737 network_create.go:287] error running [docker network inspect newest-cni-190708]: docker network inspect newest-cni-190708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-190708 not found
	I1019 12:53:11.853806  672737 network_create.go:289] output of [docker network inspect newest-cni-190708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-190708 not found
	
	** /stderr **
	I1019 12:53:11.853902  672737 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:11.872131  672737 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:53:11.872777  672737 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:53:11.873176  672737 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:53:11.873710  672737 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:53:11.874346  672737 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de90530a2892 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:1b:d3:5b:94:95} reservation:<nil>}
	I1019 12:53:11.875186  672737 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7d700}
	I1019 12:53:11.875210  672737 network_create.go:124] attempt to create docker network newest-cni-190708 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:53:11.875256  672737 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-190708 newest-cni-190708
	I1019 12:53:11.933015  672737 network_create.go:108] docker network newest-cni-190708 192.168.94.0/24 created
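The scan above walks candidate /24s upward from 192.168.49.0 in steps of 9 (49, 58, 67, 76, 85) and settles on 192.168.94.0/24. A self-contained Go sketch of that selection loop; the step size is read off the logged candidates, and the "taken" test (an existing local bridge address inside the range) is an assumption for illustration, not minikube's actual network.go logic:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface (e.g. a br-... bridge left by
// an earlier profile) already has an address inside the candidate subnet.
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative: treat errors as "in use"
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && subnet.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Candidate walk mirrors the log: 192.168.49.0/24, then +9 per attempt.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			continue
		}
		if taken(subnet) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr) // 192.168.94.0/24 here
		return
	}
	fmt.Println("no free 192.168.x.0/24 candidate found")
}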
	I1019 12:53:11.933049  672737 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-190708" container
	I1019 12:53:11.933120  672737 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:53:11.950774  672737 cli_runner.go:164] Run: docker volume create newest-cni-190708 --label name.minikube.sigs.k8s.io=newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:53:11.967572  672737 oci.go:103] Successfully created a docker volume newest-cni-190708
	I1019 12:53:11.967650  672737 cli_runner.go:164] Run: docker run --rm --name newest-cni-190708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --entrypoint /usr/bin/test -v newest-cni-190708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:53:12.367353  672737 oci.go:107] Successfully prepared a docker volume newest-cni-190708
	I1019 12:53:12.367407  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:12.367450  672737 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:53:12.367533  672737 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 12:53:13.646716  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.646757  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.295646  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:17.297846  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:16.825912  672737 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458335671s)
	I1019 12:53:16.825946  672737 kic.go:203] duration metric: took 4.45849341s to extract preloaded images to volume ...
	W1019 12:53:16.826042  672737 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:53:16.826073  672737 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:53:16.826110  672737 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:53:16.883735  672737 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-190708 --name newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-190708 --network newest-cni-190708 --ip 192.168.94.2 --volume newest-cni-190708:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:53:17.149721  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Running}}
	I1019 12:53:17.168092  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.187070  672737 cli_runner.go:164] Run: docker exec newest-cni-190708 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:53:17.235594  672737 oci.go:144] the created container "newest-cni-190708" has a running status.
	I1019 12:53:17.235624  672737 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa...
	I1019 12:53:17.641114  672737 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:53:17.666983  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.686164  672737 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:53:17.686197  672737 kic_runner.go:114] Args: [docker exec --privileged newest-cni-190708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:53:17.730607  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.748800  672737 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:17.748886  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.768809  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.769043  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.769056  672737 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:17.904434  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:17.904466  672737 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:53:17.904532  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.923140  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.923351  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.923364  672737 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:53:18.066330  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:18.066401  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.084720  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.084937  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.084955  672737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:53:18.218215  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:53:18.218243  672737 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:53:18.218295  672737 ubuntu.go:190] setting up certificates
	I1019 12:53:18.218310  672737 provision.go:84] configureAuth start
	I1019 12:53:18.218377  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.236696  672737 provision.go:143] copyHostCerts
	I1019 12:53:18.236757  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:53:18.236768  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:53:18.236836  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:53:18.236929  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:53:18.236938  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:53:18.236966  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:53:18.237022  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:53:18.237030  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:53:18.237052  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:53:18.237101  672737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:53:18.349002  672737 provision.go:177] copyRemoteCerts
	I1019 12:53:18.349061  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:53:18.349100  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.367380  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.464934  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:53:18.484736  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:53:18.502418  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:53:18.520374  672737 provision.go:87] duration metric: took 302.043863ms to configureAuth
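The server cert above is generated with org=jenkins.newest-cni-190708 and san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]. A hedged crypto/x509 sketch of that issuance, not minikube's provision.go: the throwaway in-memory CA stands in for the profile's ca.pem/ca-key.pem (not reproduced in this report), and the 26280h lifetime is borrowed from the CertExpiration field in the cluster config above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{ // stand-in for the profile CA
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-190708"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs logged above: IP and DNS identities for the apiserver host.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-190708"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("server cert DER bytes:", len(der))
}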
	I1019 12:53:18.520411  672737 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:53:18.520616  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:18.520715  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.539107  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.539337  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.539356  672737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:53:18.783336  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:53:18.783368  672737 machine.go:96] duration metric: took 1.034543859s to provisionDockerMachine
	I1019 12:53:18.783380  672737 client.go:171] duration metric: took 6.964145323s to LocalClient.Create
	I1019 12:53:18.783403  672737 start.go:167] duration metric: took 6.964207211s to libmachine.API.Create "newest-cni-190708"
	I1019 12:53:18.783410  672737 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:53:18.783444  672737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:53:18.783533  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:53:18.783575  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.802276  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.904329  672737 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:53:18.908177  672737 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:53:18.908210  672737 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:53:18.908222  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:53:18.908267  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:53:18.908346  672737 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:53:18.908470  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:53:18.916278  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:18.940533  672737 start.go:296] duration metric: took 157.106831ms for postStartSetup
	I1019 12:53:18.940837  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.959008  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:18.959254  672737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:53:18.959294  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.976265  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.069698  672737 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:53:19.074565  672737 start.go:128] duration metric: took 7.257430988s to createHost
	I1019 12:53:19.074635  672737 start.go:83] releasing machines lock for "newest-cni-190708", held for 7.257591431s
	I1019 12:53:19.074702  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:19.092846  672737 ssh_runner.go:195] Run: cat /version.json
	I1019 12:53:19.092896  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.092920  672737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:53:19.092980  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.112049  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.112296  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.259186  672737 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:19.265848  672737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:53:19.301474  672737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:53:19.306225  672737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:53:19.306297  672737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:53:19.331979  672737 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 12:53:19.332008  672737 start.go:495] detecting cgroup driver to use...
	I1019 12:53:19.332048  672737 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:53:19.332111  672737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:53:19.348084  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:53:19.360773  672737 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:53:19.360844  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:53:19.377948  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:53:19.395822  672737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:53:19.484678  672737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:53:19.575544  672737 docker.go:234] disabling docker service ...
	I1019 12:53:19.575618  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:53:19.595378  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:53:19.608092  672737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:53:19.693958  672737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:53:19.776371  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:53:19.789375  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:53:19.804627  672737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:53:19.804704  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.814787  672737 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:53:19.814837  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.823551  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.832169  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.840784  672737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:53:19.848724  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.857100  672737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.870352  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.878731  672737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:53:19.886348  672737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:53:19.893759  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:19.973321  672737 ssh_runner.go:195] Run: sudo systemctl restart crio
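Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands in this log, not a capture from the node; sed only rewrites keys in place, so surrounding TOML sections are omitted here):

	# pause image pinned to what kubeadm v1.34 expects
	pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup manager matched to the "systemd" driver detected on the host
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	# let pods bind low ports without NET_BIND_SERVICE
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]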
	I1019 12:53:20.077881  672737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:53:20.077979  672737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:53:20.082037  672737 start.go:563] Will wait 60s for crictl version
	I1019 12:53:20.082093  672737 ssh_runner.go:195] Run: which crictl
	I1019 12:53:20.085569  672737 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:53:20.109837  672737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:53:20.109920  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.138350  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.168482  672737 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:53:20.169863  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
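The Go template in that docker network inspect call flattens the network's name, driver, subnet, gateway, MTU and container IPs into a single JSON blob. If only the subnet/gateway pair is needed, a much smaller template does the job (same docker CLI, profile network from this run):

	docker network inspect newest-cni-190708 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected here: 192.168.94.0/24 192.168.94.1 (the /24 mask is an assumption, minikube's default)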
	I1019 12:53:20.188025  672737 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:53:20.192265  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
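The /etc/hosts update above uses a filter-then-append rewrite rather than an in-place edit: drop any stale line ending in the host name, append the fresh mapping, and copy the temp file back. The same pattern, with hypothetical ADDR/NAME placeholders:

	# remove the old entry (tab-anchored match), append the new one, install via cp
	{ grep -v $'\tNAME$' /etc/hosts; echo "ADDR	NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts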
	I1019 12:53:20.203815  672737 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:53:20.205047  672737 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:53:20.205149  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:20.205199  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.236514  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.236536  672737 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:53:20.236581  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.262051  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.262073  672737 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:53:20.262080  672737 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:53:20.262171  672737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:53:20.262247  672737 ssh_runner.go:195] Run: crio config
	I1019 12:53:20.309916  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:20.309950  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:20.309973  672737 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:53:20.310003  672737 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:53:20.310145  672737 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:53:20.310214  672737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:53:20.318657  672737 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:53:20.318731  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:53:20.326554  672737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:53:20.339030  672737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:53:20.354155  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
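The 2211-byte file staged here is the rendered kubeadm config shown above; it is promoted from kubeadm.yaml.new to kubeadm.yaml just before init runs (see the cp further down). To sanity-check such a file by hand, recent kubeadm releases ship a validator (a sketch, assuming the binary path from this run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new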
	I1019 12:53:20.366696  672737 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:53:20.370356  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.380455  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:20.458942  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:20.485015  672737 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:53:20.485043  672737 certs.go:195] generating shared ca certs ...
	I1019 12:53:20.485070  672737 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.485221  672737 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:53:20.485264  672737 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:53:20.485275  672737 certs.go:257] generating profile certs ...
	I1019 12:53:20.485328  672737 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:53:20.485348  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt with IP's: []
	I1019 12:53:20.585551  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt ...
	I1019 12:53:20.585580  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt: {Name:mk5251db26990dc5997b9e5853758832f57cf196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585769  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key ...
	I1019 12:53:20.585781  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key: {Name:mk05802bac0f3e5b3a8b334617d45fe07eee0068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585867  672737 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:53:20.585883  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 12:53:20.684366  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd ...
	I1019 12:53:20.684395  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd: {Name:mk395ac2723daa6eac9a1a5448aa56dcc3dae795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684562  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd ...
	I1019 12:53:20.684576  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd: {Name:mk1d126d0c5513551abbae58673dc597e26ffe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684650  672737 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt
	I1019 12:53:20.684722  672737 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key
	I1019 12:53:20.684776  672737 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:53:20.684791  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt with IP's: []
	I1019 12:53:20.821306  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt ...
	I1019 12:53:20.821336  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt: {Name:mkf04fb8bbf161179ae86ba91d4a80f873fae21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821524  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key ...
	I1019 12:53:20.821544  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key: {Name:mk22ac123e8932e8db98bd277997b637ec873079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821743  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:53:20.821779  672737 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:53:20.821789  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:53:20.821812  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:53:20.821834  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:53:20.821860  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:53:20.821901  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:20.822529  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:53:20.843244  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:53:20.860464  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:53:20.877640  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:53:20.895480  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:53:20.912797  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:53:20.929757  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:53:20.947521  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:53:20.964869  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:53:20.984248  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:53:21.003061  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:53:21.020532  672737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:53:21.033435  672737 ssh_runner.go:195] Run: openssl version
	I1019 12:53:21.040056  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:53:21.049001  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052716  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052781  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.088149  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:53:21.097154  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:53:21.105495  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109154  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109216  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.144296  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:53:21.153347  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:53:21.161940  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165605  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165655  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.199345  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
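Each of the three ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) is the OpenSSL subject hash of the corresponding certificate, which is how the trust store in /etc/ssl/certs is indexed. The pattern for one cert, using paths from this log:

	# print the subject hash (b5213941 for minikubeCA.pem per the link above), then link <hash>.0 to the cert
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"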
	I1019 12:53:21.208215  672737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:53:21.212056  672737 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:53:21.212119  672737 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:21.212215  672737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:53:21.212265  672737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:53:21.240234  672737 cri.go:89] found id: ""
	I1019 12:53:21.240301  672737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:53:21.248582  672737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:53:21.256728  672737 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:53:21.256801  672737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:53:21.265096  672737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:53:21.265135  672737 kubeadm.go:157] found existing configuration files:
	
	I1019 12:53:21.265192  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:53:21.273544  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:53:21.273612  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:53:21.282090  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:53:21.290396  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:53:21.290490  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:53:21.300201  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.308252  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:53:21.308306  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.315749  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:53:21.323167  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:53:21.323239  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:53:21.330315  672737 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:53:21.369107  672737 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:53:21.369180  672737 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:53:21.390319  672737 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:53:21.390379  672737 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:53:21.390409  672737 kubeadm.go:318] OS: Linux
	I1019 12:53:21.390480  672737 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:53:21.390540  672737 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:53:21.390652  672737 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:53:21.390735  672737 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:53:21.390790  672737 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:53:21.390890  672737 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:53:21.390973  672737 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:53:21.391026  672737 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:53:21.449690  672737 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:53:21.449859  672737 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:53:21.449988  672737 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:53:21.458017  672737 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:53:21.459979  672737 out.go:252]   - Generating certificates and keys ...
	I1019 12:53:21.460084  672737 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:53:21.460184  672737 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1019 12:53:17.646821  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.647689  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.795394  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:21.795584  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:23.796166  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:21.782609  672737 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:53:22.004817  672737 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:53:22.154911  672737 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:53:22.730145  672737 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:53:22.932723  672737 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:53:22.932904  672737 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.243959  672737 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:53:23.244120  672737 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.410854  672737 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:53:23.472366  672737 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:53:23.643869  672737 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:53:23.644033  672737 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:53:23.711987  672737 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:53:24.037993  672737 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:53:24.501726  672737 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:53:24.744523  672737 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:53:24.859147  672737 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:53:24.859688  672737 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:53:24.863264  672737 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:53:24.864642  672737 out.go:252]   - Booting up control plane ...
	I1019 12:53:24.864730  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:53:24.864796  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:53:24.865498  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:53:24.879079  672737 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:53:24.879207  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:53:24.886821  672737 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:53:24.887101  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:53:24.887199  672737 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:53:24.983491  672737 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:53:24.983708  672737 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:53:25.984614  672737 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001307224s
	I1019 12:53:25.988599  672737 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:53:25.988724  672737 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:53:25.988848  672737 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:53:25.988960  672737 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 12:53:22.146944  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:24.647501  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:26.295683  663517 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:53:26.295713  663517 pod_ready.go:86] duration metric: took 31.505627238s for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.297917  663517 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.301953  663517 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:53:26.301978  663517 pod_ready.go:86] duration metric: took 4.035262ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.304112  663517 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.308120  663517 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:53:26.308144  663517 pod_ready.go:86] duration metric: took 4.009533ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.309999  663517 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.494192  663517 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:53:26.494219  663517 pod_ready.go:86] duration metric: took 184.199033ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.694487  663517 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.094397  663517 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:53:27.094457  663517 pod_ready.go:86] duration metric: took 399.93585ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.293675  663517 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694119  663517 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:53:27.694146  663517 pod_ready.go:86] duration metric: took 400.447048ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694158  663517 pod_ready.go:40] duration metric: took 32.912525222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:27.746279  663517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:27.748237  663517 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
	I1019 12:53:27.518915  672737 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.530228054s
	I1019 12:53:28.053793  672737 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.061152071s
	I1019 12:53:29.990081  672737 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001429284s
	I1019 12:53:30.001867  672737 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:53:30.014037  672737 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:53:30.024140  672737 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:53:30.024456  672737 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-190708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:53:30.033264  672737 kubeadm.go:318] [bootstrap-token] Using token: gtkds1.9e0h8pmw5r5mqwja
	I1019 12:53:30.034587  672737 out.go:252]   - Configuring RBAC rules ...
	I1019 12:53:30.034754  672737 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:53:30.038773  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:53:30.045039  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:53:30.049009  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:53:30.052044  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:53:30.054665  672737 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:53:30.397490  672737 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:53:30.827821  672737 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:53:31.396481  672737 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:53:31.397310  672737 kubeadm.go:318] 
	I1019 12:53:31.397402  672737 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:53:31.397413  672737 kubeadm.go:318] 
	I1019 12:53:31.397551  672737 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:53:31.397565  672737 kubeadm.go:318] 
	I1019 12:53:31.397596  672737 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:53:31.397650  672737 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:53:31.397698  672737 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:53:31.397705  672737 kubeadm.go:318] 
	I1019 12:53:31.397749  672737 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:53:31.397755  672737 kubeadm.go:318] 
	I1019 12:53:31.397794  672737 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:53:31.397800  672737 kubeadm.go:318] 
	I1019 12:53:31.397861  672737 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:53:31.397953  672737 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:53:31.398040  672737 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:53:31.398051  672737 kubeadm.go:318] 
	I1019 12:53:31.398140  672737 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:53:31.398207  672737 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:53:31.398213  672737 kubeadm.go:318] 
	I1019 12:53:31.398292  672737 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398378  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:53:31.398399  672737 kubeadm.go:318] 	--control-plane 
	I1019 12:53:31.398405  672737 kubeadm.go:318] 
	I1019 12:53:31.398523  672737 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:53:31.398534  672737 kubeadm.go:318] 
	I1019 12:53:31.398627  672737 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398790  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
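The --discovery-token-ca-cert-hash printed in the join command is a SHA-256 over the cluster CA's public key. It can be recomputed with the standard OpenSSL pipeline from the kubeadm docs (CA path taken from this run's certificatesDir; the rsa step assumes an RSA CA key, as the docs' example does):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print the cd3cedbd... digest shown in the join command above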
	I1019 12:53:31.401824  672737 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:53:31.402002  672737 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 12:53:31.402023  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:31.402032  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:31.403960  672737 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:53:31.405314  672737 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:53:31.410474  672737 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:53:31.410496  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:53:31.424273  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1019 12:53:27.147074  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:29.645647  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:31.646857  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:31.641912  672737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:53:31.642008  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:31.642011  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-190708 minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-190708 minikube.k8s.io/primary=true
	I1019 12:53:31.652529  672737 ops.go:34] apiserver oom_adj: -16
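The -16 read back here is the legacy /proc view: the kernel reports oom_adj as oom_score_adj scaled by 17/1000, so the -997 that kubelet is documented to assign to critical static pods surfaces as -16. The modern value can be read directly (pgrep -o picks the oldest matching process):

	cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj   # expected: -997 (assumption: kubelet's critical-pod default)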
	I1019 12:53:31.718996  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.219629  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.719834  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.219813  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.719692  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.219076  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.719433  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.219917  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.719034  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.785029  672737 kubeadm.go:1113] duration metric: took 4.143080971s to wait for elevateKubeSystemPrivileges
	I1019 12:53:35.785068  672737 kubeadm.go:402] duration metric: took 14.57295181s to StartCluster
	I1019 12:53:35.785101  672737 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.785174  672737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:35.787497  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.787794  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:53:35.787820  672737 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:35.787897  672737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:53:35.787993  672737 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:53:35.788017  672737 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	I1019 12:53:35.788020  672737 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	I1019 12:53:35.788053  672737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:53:35.788062  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:35.788057  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.788500  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.788555  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.789512  672737 out.go:179] * Verifying Kubernetes components...
	I1019 12:53:35.791378  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:35.812380  672737 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:53:33.646988  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:34.648076  664256 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:53:34.648104  664256 pod_ready.go:86] duration metric: took 36.507165259s for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.650741  664256 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.654523  664256 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.654547  664256 pod_ready.go:86] duration metric: took 3.785206ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.656429  664256 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.660685  664256 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.660712  664256 pod_ready.go:86] duration metric: took 4.258461ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.662348  664256 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.844857  664256 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.844886  664256 pod_ready.go:86] duration metric: took 182.521582ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.044783  664256 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.445005  664256 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:53:35.445031  664256 pod_ready.go:86] duration metric: took 400.222332ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.645060  664256 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045246  664256 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:36.045282  664256 pod_ready.go:86] duration metric: took 400.190569ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045298  664256 pod_ready.go:40] duration metric: took 37.908676389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:36.105764  664256 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.108299  664256 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
	I1019 12:53:35.813186  672737 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	I1019 12:53:35.813237  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.813735  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.815209  672737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.815225  672737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:53:35.815282  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.843451  672737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:35.843479  672737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:53:35.843567  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.844218  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.868726  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.877614  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:53:35.929249  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:35.955142  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.988275  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:36.052147  672737 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1019 12:53:36.053790  672737 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:53:36.053847  672737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:53:36.305744  672737 api_server.go:72] duration metric: took 517.881771ms to wait for apiserver process to appear ...
	I1019 12:53:36.305769  672737 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:53:36.305790  672737 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:53:36.310834  672737 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:53:36.311767  672737 api_server.go:141] control plane version: v1.34.1
	I1019 12:53:36.311798  672737 api_server.go:131] duration metric: took 6.020737ms to wait for apiserver health ...
	I1019 12:53:36.311809  672737 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:53:36.313872  672737 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:53:36.314880  672737 system_pods.go:59] 8 kube-system pods found
	I1019 12:53:36.314917  672737 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314933  672737 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:53:36.314945  672737 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:53:36.314955  672737 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:53:36.314961  672737 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:53:36.314969  672737 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:53:36.314976  672737 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:53:36.314981  672737 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314992  672737 system_pods.go:74] duration metric: took 3.173905ms to wait for pod list to return data ...
	I1019 12:53:36.315000  672737 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:53:36.315055  672737 addons.go:514] duration metric: took 527.155312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:53:36.317196  672737 default_sa.go:45] found service account: "default"
	I1019 12:53:36.317218  672737 default_sa.go:55] duration metric: took 2.212206ms for default service account to be created ...
	I1019 12:53:36.317230  672737 kubeadm.go:586] duration metric: took 529.375092ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:36.317251  672737 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:53:36.319523  672737 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:53:36.319545  672737 node_conditions.go:123] node cpu capacity is 8
	I1019 12:53:36.319557  672737 node_conditions.go:105] duration metric: took 2.300039ms to run NodePressure ...
	I1019 12:53:36.319567  672737 start.go:241] waiting for startup goroutines ...
	I1019 12:53:36.557265  672737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-190708" context rescaled to 1 replicas
	I1019 12:53:36.557311  672737 start.go:246] waiting for cluster config update ...
	I1019 12:53:36.557328  672737 start.go:255] writing updated cluster config ...
	I1019 12:53:36.557703  672737 ssh_runner.go:195] Run: rm -f paused
	I1019 12:53:36.609706  672737 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.612691  672737 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
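
A note on the ConfigMap edit at 12:53:35.877 above: minikube injects a host.minikube.internal record into CoreDNS by fetching the coredns ConfigMap, splicing a hosts block in front of the forward plugin with sed, and replacing the object. A minimal sketch of the same edit done by hand (assuming kubectl access to the cluster; 192.168.94.1 is the gateway IP taken from the log):

    # Fetch the live CoreDNS config, add a static host record, and push it back.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -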
	
	
	==> CRI-O <==
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.88483956Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.892886267Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.892921327Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.932329695Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8b3e74ed-4f85-482c-87f4-a76ef9aa9099 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.935681047Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b396f35a-7120-4a13-a27c-32c5b73a6685 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.938874512Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=f22b1c94-815e-4447-9541-b6608a341051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.94067125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.947666157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.948284651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.9766739Z" level=info msg="Created container fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=f22b1c94-815e-4447-9541-b6608a341051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.977301467Z" level=info msg="Starting container: fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498" id=53e54053-4a21-4a06-afce-ece0565d6426 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.979283846Z" level=info msg="Started container" PID=1741 containerID=fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper id=53e54053-4a21-4a06-afce-ece0565d6426 name=/runtime.v1.RuntimeService/StartContainer sandboxID=948ed0e57e10af1077e46c4f3013445ab8657025398b6c9deb2ecca75846eedd
	Oct 19 12:53:22 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:22.05253305Z" level=info msg="Removing container: 9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4" id=b4478149-38c6-4304-8ca5-bc8587910e08 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:22 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:22.063013739Z" level=info msg="Removed container 9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=b4478149-38c6-4304-8ca5-bc8587910e08 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.071441936Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2066969c-69c0-4b5e-b886-2de43b3264b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.072654733Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1a1bb9c-ecdf-46d6-b1ca-4940e286f6f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.074546867Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=681e82bf-0cc6-4a83-8f5e-954dc4c7b7c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.074969232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079694523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079900341Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/009fbe1bc0de7f6b617f455576b723f8436c8152497a23fdc6f0ad8621b2b009/merged/etc/passwd: no such file or directory"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079931687Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/009fbe1bc0de7f6b617f455576b723f8436c8152497a23fdc6f0ad8621b2b009/merged/etc/group: no such file or directory"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.080201445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.116665943Z" level=info msg="Created container 3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f: kube-system/storage-provisioner/storage-provisioner" id=681e82bf-0cc6-4a83-8f5e-954dc4c7b7c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.117391488Z" level=info msg="Starting container: 3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f" id=cec976e7-ac10-4e69-8268-4c3466fa0e21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.12097517Z" level=info msg="Started container" PID=1755 containerID=3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f description=kube-system/storage-provisioner/storage-provisioner id=cec976e7-ac10-4e69-8268-4c3466fa0e21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf31ff487ae9f3e44a9997e108b3767b012e2395e8dc253f5e1e71f2a0fd1473
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3958f67da7990       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   bf31ff487ae9f       storage-provisioner                                    kube-system
	fdc334ceb1fdf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   948ed0e57e10a       dashboard-metrics-scraper-6ffb444bf9-668bp             kubernetes-dashboard
	1cd8bcfb5c309       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   b3cd80fdf703c       kubernetes-dashboard-855c9754f9-bv5k2                  kubernetes-dashboard
	78d2ca731e98b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   6f7a0a00feef2       coredns-66bc5c9577-hftjp                               kube-system
	b64ed35b37aa5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   6a7e6373a7bc2       busybox                                                default
	81423f1b546a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   bf31ff487ae9f       storage-provisioner                                    kube-system
	1a511f79ffb76       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   c75f8116f699e       kindnet-79bv6                                          kube-system
	dd65c0ffcffff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   44d3ace69f67c       kube-proxy-cjxjt                                       kube-system
	7387a9f9039b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   10d5a8cf5f66b       kube-scheduler-default-k8s-diff-port-999693            kube-system
	dc93d8bd2fb47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   6b4bb05fe90f7       kube-apiserver-default-k8s-diff-port-999693            kube-system
	386f63ea17ece       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   9e55a87730ee1       kube-controller-manager-default-k8s-diff-port-999693   kube-system
	3d2737d35156d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   b8cc35ef027bd       etcd-default-k8s-diff-port-999693                      kube-system
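
The table above is the CRI view of the node's containers. The same data can be pulled straight from CRI-O over SSH; a sketch (profile name taken from the log; crictl accepts unambiguous container ID prefixes):

    # List every container, running or exited, as CRI-O reports it.
    minikube -p default-k8s-diff-port-999693 ssh -- sudo crictl ps -a
    # Tail the restarting scraper's container log by ID prefix.
    minikube -p default-k8s-diff-port-999693 ssh -- sudo crictl logs fdc334ceb1fdf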
	
	
	==> coredns [78d2ca731e98befb02938c95d004c6de4e1bb290061976cb23bcd09a6b0139e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58028 - 34348 "HINFO IN 773624347461218208.5993015123048664836. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.10228723s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
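
The i/o timeouts above mean CoreDNS briefly could not reach the API server at 10.96.0.1:443 after the restart, and the plugin/ready lines are its readiness gate holding until the kubernetes plugin syncs. A quick way to confirm cluster DNS recovered, sketched with a throwaway pod (the busybox image is only an example):

    # Resolve an in-cluster service name from a temporary pod, then delete it.
    kubectl run dns-check --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local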
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-999693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-999693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-999693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_52_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-999693
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:53:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:52:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-999693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                dba8bd18-ed7d-4c69-88aa-2713b680a799
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-hftjp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-default-k8s-diff-port-999693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-79bv6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-default-k8s-diff-port-999693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-999693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-cjxjt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-default-k8s-diff-port-999693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-668bp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bv5k2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node default-k8s-diff-port-999693 event: Registered Node default-k8s-diff-port-999693 in Controller
	  Normal  NodeReady                93s                kubelet          Node default-k8s-diff-port-999693 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-999693 event: Registered Node default-k8s-diff-port-999693 in Controller
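
This describe output was captured after recovery: Taints is <none> and the Ready condition is True. Earlier in the log, the newest-cni pods were Unschedulable precisely because of an untolerated node.kubernetes.io/not-ready taint, so checking taints directly is a useful first step; a sketch:

    # Print each node alongside any taints still set on it.
    kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints'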
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0] <==
	{"level":"warn","ts":"2025-10-19T12:52:55.808955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.818906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.826933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.834608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.841655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.849382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.857804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.866173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.873560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.881201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.890033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.897105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.904997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.913887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.920298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.927436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.935296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.945561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.957712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.972291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.978925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.989709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.998135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:56.005635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:56.062065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:53:50 up  2:36,  0 user,  load average: 2.90, 4.35, 3.03
	Linux default-k8s-diff-port-999693 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a511f79ffb7681fd929b4894c4f59a2a44ed69f557e9e40d7d67bdedd66fb6d] <==
	I1019 12:52:57.584708       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:57.585173       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 12:52:57.585363       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:57.585381       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:57.585407       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:57.857795       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:57.857860       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:57.857871       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:58.057565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:58.358881       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:58.358984       1 metrics.go:72] Registering metrics
	I1019 12:52:58.359109       1 controller.go:711] "Syncing nftables rules"
	I1019 12:53:07.857906       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:07.857981       1 main.go:301] handling current node
	I1019 12:53:17.860496       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:17.860534       1 main.go:301] handling current node
	I1019 12:53:27.857553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:27.857594       1 main.go:301] handling current node
	I1019 12:53:37.859511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:37.859568       1 main.go:301] handling current node
	I1019 12:53:47.866646       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:47.866682       1 main.go:301] handling current node
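
kindnet syncs nftables rules for its network-policy plugin and then simply re-handles the single node every ten seconds. To see what it actually programmed, the ruleset can be dumped on the node; a sketch (assumes the node image ships the nft CLI):

    # Dump the nftables ruleset kindnet maintains for network policies.
    minikube -p default-k8s-diff-port-999693 ssh -- sudo nft list ruleset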
	
	
	==> kube-apiserver [dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74] <==
	I1019 12:52:56.548751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 12:52:56.548789       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:52:56.548847       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:52:56.548943       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:52:56.549236       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:56.549271       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:56.549702       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:56.549743       1 cache.go:39] Caches are synced for autoregister controller
	E1019 12:52:56.553181       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:56.554296       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:52:56.576833       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:56.585608       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:56.585650       1 policy_source.go:240] refreshing policies
	I1019 12:52:56.675306       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:56.839307       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:56.867764       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:56.888522       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:56.897265       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:56.906743       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:56.956235       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.5.64"}
	I1019 12:52:56.969748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.216.230"}
	I1019 12:52:57.454562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:53:00.223463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:53:00.323518       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:53:00.374380       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a] <==
	I1019 12:52:59.874698       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:59.875000       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:52:59.875051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:52:59.875155       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:52:59.875189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:52:59.875218       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:52:59.876058       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:59.876914       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:59.877761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:59.878224       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:52:59.880770       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:59.883047       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:52:59.883188       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:52:59.883351       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-999693"
	I1019 12:52:59.883415       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:59.884550       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:52:59.885556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:59.887473       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:52:59.888932       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:52:59.890545       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:52:59.892978       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:52:59.899694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:52:59.904970       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:59.905031       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:52:59.905045       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [dd65c0ffcffffaa62043de3c54111cd1ddf6293c650cbd534ce5438d3ee3e784] <==
	I1019 12:52:57.369275       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:57.438176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:57.538526       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:57.538571       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 12:52:57.538731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:57.566183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:57.566247       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:57.573696       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:57.574216       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:57.574575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:57.576999       1 config.go:200] "Starting service config controller"
	I1019 12:52:57.577805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:57.577882       1 config.go:309] "Starting node config controller"
	I1019 12:52:57.577896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:57.577903       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:57.577741       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:57.577994       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:57.577754       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:57.578022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:57.678074       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:52:57.678111       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:57.678161       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
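
kube-proxy selected the iptables proxier above, so every Service is rendered as rules in the nat table, dispatched from the KUBE-SERVICES chain. A quick look, sketched as a command run on the node:

    # Show the top-level Service dispatch chain kube-proxy programs.
    minikube -p default-k8s-diff-port-999693 ssh -- \
      sudo iptables -t nat -L KUBE-SERVICES -n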
	
	
	==> kube-scheduler [7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4] <==
	I1019 12:52:55.337561       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:52:56.522482       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:56.522504       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:56.527648       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:52:56.527699       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:52:56.527705       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.527730       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.527900       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:56.528209       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:56.528480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:56.528568       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:56.628077       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:52:56.628131       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.628453       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:53:00 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:00.598122     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7z87\" (UniqueName: \"kubernetes.io/projected/ffe96798-7c36-44e9-9226-0fea7d9cba29-kube-api-access-w7z87\") pod \"kubernetes-dashboard-855c9754f9-bv5k2\" (UID: \"ffe96798-7c36-44e9-9226-0fea7d9cba29\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bv5k2"
	Oct 19 12:53:00 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:00.598153     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvmm\" (UniqueName: \"kubernetes.io/projected/be6e9801-108a-4894-958e-283c60be7560-kube-api-access-pfvmm\") pod \"dashboard-metrics-scraper-6ffb444bf9-668bp\" (UID: \"be6e9801-108a-4894-958e-283c60be7560\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp"
	Oct 19 12:53:03 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:03.994230     716 scope.go:117] "RemoveContainer" containerID="a8d742844e3efb843d85972ded1da36c6bbb0cca4b7c2fc0ed2d1736642130f5"
	Oct 19 12:53:04 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:04.385388     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:05.000710     716 scope.go:117] "RemoveContainer" containerID="a8d742844e3efb843d85972ded1da36c6bbb0cca4b7c2fc0ed2d1736642130f5"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:05.001268     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:05.001583     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:06 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:06.005277     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:06 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:06.005903     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:07 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:07.008337     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:07 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:07.008589     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:08 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:08.023418     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bv5k2" podStartSLOduration=1.371320181 podStartE2EDuration="8.023393643s" podCreationTimestamp="2025-10-19 12:53:00 +0000 UTC" firstStartedPulling="2025-10-19 12:53:00.834327714 +0000 UTC m=+7.006794712" lastFinishedPulling="2025-10-19 12:53:07.48640116 +0000 UTC m=+13.658868174" observedRunningTime="2025-10-19 12:53:08.023207883 +0000 UTC m=+14.195674898" watchObservedRunningTime="2025-10-19 12:53:08.023393643 +0000 UTC m=+14.195860658"
	Oct 19 12:53:21 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:21.931857     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:22.051119     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:22.051332     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:22.051546     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:25 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:25.760817     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:25 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:25.761044     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:28 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:28.070963     716 scope.go:117] "RemoveContainer" containerID="81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375"
	Oct 19 12:53:38 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:38.931623     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:38 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:38.931870     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: kubelet.service: Consumed 1.710s CPU time.
	
	
	==> kubernetes-dashboard [1cd8bcfb5c309260593239de52b34e22550c164bb9abd93b219cb9e1a5bf0fbe] <==
	2025/10/19 12:53:07 Starting overwatch
	2025/10/19 12:53:07 Using namespace: kubernetes-dashboard
	2025/10/19 12:53:07 Using in-cluster config to connect to apiserver
	2025/10/19 12:53:07 Using secret token for csrf signing
	2025/10/19 12:53:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:53:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:53:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:53:07 Generating JWE encryption key
	2025/10/19 12:53:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:53:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:53:07 Initializing JWE encryption key from synchronized object
	2025/10/19 12:53:07 Creating in-cluster Sidecar client
	2025/10/19 12:53:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:53:07 Serving insecurely on HTTP port: 9090
	2025/10/19 12:53:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f] <==
	I1019 12:53:28.133221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:53:28.141366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:53:28.141455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:53:28.143771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:31.599561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:35.861371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:39.459339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.512960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.535755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.540108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:45.540288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:45.540461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042!
	I1019 12:53:45.540469       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94ffe2ba-d9f2-4be7-afb9-f7f386e949ce", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042 became leader
	W1019 12:53:45.542738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.547448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:45.641509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042!
	W1019 12:53:47.550707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:47.555409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:49.559665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:49.564856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375] <==
	I1019 12:52:57.340214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:53:27.348534       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
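
The fatal storage-provisioner line above (main.go:39, `error getting server version ... dial tcp 10.96.0.1:443: i/o timeout`) is its startup probe of the in-cluster apiserver service IP. A minimal client-go sketch of such a probe, assuming it runs inside a pod with a mounted service account (hypothetical standalone program, not minikube's actual source):

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves the same service endpoint that timed
		// out in the log above (https://10.96.0.1:443).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// The /version probe; a dial timeout here matches the F1019 line.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}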
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693: exit status 2 (313.564367ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
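
The "(may be ok)" note reflects that `minikube status` can print a component state such as "Running" while still exiting non-zero, so a caller has to read both the output and the exit code. A hedged standard-library sketch of that pattern (hypothetical wrapper, not the harness code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "default-k8s-diff-port-999693")
		out, err := cmd.Output() // stdout can say "Running" even on error
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// The harness tolerates exit status 2 ("may be ok") because
			// the printed state is still meaningful.
			fmt.Printf("state=%q exit=%d\n", strings.TrimSpace(string(out)), ee.ExitCode())
			return
		}
		fmt.Printf("state=%q\n", strings.TrimSpace(string(out)))
	}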
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
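
The `kubectl get po` call above uses `--field-selector=status.phase!=Running` to surface only unhealthy pods across all namespaces. A client-go sketch of the same query (hypothetical; the kubeconfig path is the one the harness environment reports later in these logs, and the selected context is assumed current):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21772-351705/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same filter as the harness: every namespace, phase != Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}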
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-999693
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-999693:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	        "Created": "2025-10-19T12:51:45.922696096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:52:47.156686363Z",
	            "FinishedAt": "2025-10-19T12:52:46.282964524Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/hosts",
	        "LogPath": "/var/lib/docker/containers/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0/1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0-json.log",
	        "Name": "/default-k8s-diff-port-999693",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-999693:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-999693",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ece3120c0d2a544fd3f339a435cacc4be05ea60e7a9a421088ea1652ea505c0",
	                "LowerDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d016932c7c0e15b8492434e9df816bb70a3f0d2bf447aee756582d31ab21f0c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-999693",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-999693/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-999693",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-999693",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fc2aaa49b456f7607ac4e4ba8ddbb8b60c8574c90462a4f4262df0f28545c55b",
	            "SandboxKey": "/var/run/docker/netns/fc2aaa49b456",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-999693": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:ae:10:af:56:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de90530a289272ed110d9eb21157ec5037120fb6575a550c928b9dda03629c85",
	                    "EndpointID": "d8036758ee0cd0ce979b22fbbfdf2bfe27bdd4d51a0d6be413cbaa73cc1b06fa",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-999693",
	                        "1ece3120c0d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
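
In the inspect output above, HostConfig.PortBindings requests ephemeral ports (empty HostPort) and NetworkSettings.Ports shows what Docker actually bound, e.g. 33498 for the 8444/tcp apiserver port. A sketch of recovering that mapping programmatically, assuming the docker CLI is on PATH (hypothetical helper, not harness code):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Go-template lookup into NetworkSettings.Ports, mirroring the
		// JSON structure printed by `docker inspect` above.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-999693").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}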
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693: exit status 2 (304.017282ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-999693 logs -n 25: (1.043959553s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-123864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p embed-certs-123864 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-999693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-999693 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ addons  │ enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ default-k8s-diff-port-999693 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p default-k8s-diff-port-999693 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:11.615027  672737 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:11.615299  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615311  672737 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:11.615315  672737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:11.615551  672737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:11.616038  672737 out.go:368] Setting JSON to false
	I1019 12:53:11.617746  672737 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9340,"bootTime":1760869052,"procs":566,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:11.617846  672737 start.go:141] virtualization: kvm guest
	I1019 12:53:11.619915  672737 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:11.621699  672737 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:11.621736  672737 notify.go:220] Checking for updates...
	I1019 12:53:11.624129  672737 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:11.626246  672737 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:11.627453  672737 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:11.628681  672737 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:11.629995  672737 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:11.631642  672737 config.go:182] Loaded profile config "default-k8s-diff-port-999693": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631786  672737 config.go:182] Loaded profile config "embed-certs-123864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:11.631990  672737 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:11.658136  672737 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:11.658233  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.722933  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.711540262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.723046  672737 docker.go:318] overlay module found
	I1019 12:53:11.724874  672737 out.go:179] * Using the docker driver based on user configuration
	I1019 12:53:11.726372  672737 start.go:305] selected driver: docker
	I1019 12:53:11.726394  672737 start.go:925] validating driver "docker" against <nil>
	I1019 12:53:11.726412  672737 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:11.727020  672737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:11.787909  672737 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-19 12:53:11.778156597 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:11.788107  672737 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1019 12:53:11.788149  672737 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1019 12:53:11.788529  672737 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:11.790331  672737 out.go:179] * Using Docker driver with root privileges
	I1019 12:53:11.791430  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:11.791511  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:11.791528  672737 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1019 12:53:11.791587  672737 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:11.792873  672737 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:11.794127  672737 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:11.795216  672737 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:11.796409  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:11.796465  672737 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:11.796477  672737 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:11.796486  672737 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:11.796551  672737 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:11.796562  672737 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:11.796649  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:11.796666  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json: {Name:mk458b42b0f9f21f6e5af311f76e8caf9c4c5efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:11.816881  672737 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:11.816898  672737 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:11.816920  672737 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:11.816943  672737 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:11.817032  672737 start.go:364] duration metric: took 74.015µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:11.817054  672737 start.go:93] Provisioning new machine with config: &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:11.817117  672737 start.go:125] createHost starting for "" (driver="docker")
	W1019 12:53:09.146473  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:11.146837  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:10.296323  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:12.795707  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:11.818963  672737 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1019 12:53:11.819197  672737 start.go:159] libmachine.API.Create for "newest-cni-190708" (driver="docker")
	I1019 12:53:11.819227  672737 client.go:168] LocalClient.Create starting
	I1019 12:53:11.819287  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem
	I1019 12:53:11.819320  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819338  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819384  672737 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem
	I1019 12:53:11.819402  672737 main.go:141] libmachine: Decoding PEM data...
	I1019 12:53:11.819412  672737 main.go:141] libmachine: Parsing certificate...
	I1019 12:53:11.819803  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1019 12:53:11.837346  672737 cli_runner.go:211] docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1019 12:53:11.837404  672737 network_create.go:284] running [docker network inspect newest-cni-190708] to gather additional debugging logs...
	I1019 12:53:11.837466  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708
	W1019 12:53:11.853768  672737 cli_runner.go:211] docker network inspect newest-cni-190708 returned with exit code 1
	I1019 12:53:11.853794  672737 network_create.go:287] error running [docker network inspect newest-cni-190708]: docker network inspect newest-cni-190708: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-190708 not found
	I1019 12:53:11.853806  672737 network_create.go:289] output of [docker network inspect newest-cni-190708]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-190708 not found
	
	** /stderr **
	I1019 12:53:11.853902  672737 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:11.872131  672737 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
	I1019 12:53:11.872777  672737 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6cccd776798e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:e6:1b:39:ab:6e:7b} reservation:<nil>}
	I1019 12:53:11.873176  672737 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-91914a6ce07e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:86:1c:aa:a8:a4:4a} reservation:<nil>}
	I1019 12:53:11.873710  672737 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fcd0a3e89589 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:54:90:aa:5c:46} reservation:<nil>}
	I1019 12:53:11.874346  672737 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de90530a2892 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f2:1b:d3:5b:94:95} reservation:<nil>}
	I1019 12:53:11.875186  672737 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e7d700}
	I1019 12:53:11.875210  672737 network_create.go:124] attempt to create docker network newest-cni-190708 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1019 12:53:11.875256  672737 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-190708 newest-cni-190708
	I1019 12:53:11.933015  672737 network_create.go:108] docker network newest-cni-190708 192.168.94.0/24 created
	I1019 12:53:11.933049  672737 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-190708" container
	I1019 12:53:11.933120  672737 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1019 12:53:11.950774  672737 cli_runner.go:164] Run: docker volume create newest-cni-190708 --label name.minikube.sigs.k8s.io=newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true
	I1019 12:53:11.967572  672737 oci.go:103] Successfully created a docker volume newest-cni-190708
	I1019 12:53:11.967650  672737 cli_runner.go:164] Run: docker run --rm --name newest-cni-190708-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --entrypoint /usr/bin/test -v newest-cni-190708:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1019 12:53:12.367353  672737 oci.go:107] Successfully prepared a docker volume newest-cni-190708
	I1019 12:53:12.367407  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:12.367450  672737 kic.go:194] Starting extracting preloaded images to volume ...
	I1019 12:53:12.367533  672737 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1019 12:53:13.646716  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.646757  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:15.295646  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:17.297846  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:16.825912  672737 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-190708:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.458335671s)
	I1019 12:53:16.825946  672737 kic.go:203] duration metric: took 4.45849341s to extract preloaded images to volume ...
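The extraction step above runs tar inside a throwaway kicbase container so the lz4 preload tarball lands in the freshly created named volume. A sketch of the same invocation driven from Go via os/exec; the tarball path is an assumed placeholder and the image tag is shortened, so this mirrors the shape of the logged command rather than minikube's real helper.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Assumed paths; the log uses the jenkins workspace and a pinned kicbase digest.
	tarball := "/home/user/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	volume := "newest-cni-190708"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"

	// Same shape as the logged command: mount the tarball read-only, mount the
	// named volume at /extractDir, and untar (lz4-compressed) into it.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}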
	W1019 12:53:16.826042  672737 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1019 12:53:16.826073  672737 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1019 12:53:16.826110  672737 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1019 12:53:16.883735  672737 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-190708 --name newest-cni-190708 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-190708 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-190708 --network newest-cni-190708 --ip 192.168.94.2 --volume newest-cni-190708:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1019 12:53:17.149721  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Running}}
	I1019 12:53:17.168092  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.187070  672737 cli_runner.go:164] Run: docker exec newest-cni-190708 stat /var/lib/dpkg/alternatives/iptables
	I1019 12:53:17.235594  672737 oci.go:144] the created container "newest-cni-190708" has a running status.
	I1019 12:53:17.235624  672737 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa...
	I1019 12:53:17.641114  672737 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1019 12:53:17.666983  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.686164  672737 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1019 12:53:17.686197  672737 kic_runner.go:114] Args: [docker exec --privileged newest-cni-190708 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1019 12:53:17.730607  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:17.748800  672737 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:17.748886  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.768809  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.769043  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.769056  672737 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:17.904434  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:17.904466  672737 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:53:17.904532  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:17.923140  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:17.923351  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:17.923364  672737 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:53:18.066330  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:53:18.066401  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.084720  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.084937  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.084955  672737 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:53:18.218215  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:53:18.218243  672737 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:53:18.218295  672737 ubuntu.go:190] setting up certificates
	I1019 12:53:18.218310  672737 provision.go:84] configureAuth start
	I1019 12:53:18.218377  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.236696  672737 provision.go:143] copyHostCerts
	I1019 12:53:18.236757  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:53:18.236768  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:53:18.236836  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:53:18.236929  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:53:18.236938  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:53:18.236966  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:53:18.237022  672737 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:53:18.237030  672737 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:53:18.237052  672737 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:53:18.237101  672737 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:53:18.349002  672737 provision.go:177] copyRemoteCerts
	I1019 12:53:18.349061  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:53:18.349100  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.367380  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.464934  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:53:18.484736  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:53:18.502418  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:53:18.520374  672737 provision.go:87] duration metric: took 302.043863ms to configureAuth
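configureAuth above generates a server certificate signed by the minikube CA, with the SAN list shown in the log ([127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]). A self-contained Go sketch of the same idea with crypto/x509 follows; the in-memory self-signed CA, key type, and lifetimes are assumptions for illustration, not minikube's exact parameters.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A self-signed CA stands in for ~/.minikube/certs/ca.pem in this sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the IP and DNS SANs from the log line.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-190708"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-190708"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}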
	I1019 12:53:18.520411  672737 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:53:18.520616  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:18.520715  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.539107  672737 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:18.539337  672737 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33500 <nil> <nil>}
	I1019 12:53:18.539356  672737 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:53:18.783336  672737 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:53:18.783368  672737 machine.go:96] duration metric: took 1.034543859s to provisionDockerMachine
	I1019 12:53:18.783380  672737 client.go:171] duration metric: took 6.964145323s to LocalClient.Create
	I1019 12:53:18.783403  672737 start.go:167] duration metric: took 6.964207211s to libmachine.API.Create "newest-cni-190708"
	I1019 12:53:18.783410  672737 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:53:18.783444  672737 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:53:18.783533  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:53:18.783575  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.802276  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:18.904329  672737 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:53:18.908177  672737 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:53:18.908210  672737 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:53:18.908222  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:53:18.908267  672737 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:53:18.908346  672737 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:53:18.908470  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:53:18.916278  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:18.940533  672737 start.go:296] duration metric: took 157.106831ms for postStartSetup
	I1019 12:53:18.940837  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:18.959008  672737 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:18.959254  672737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:53:18.959294  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:18.976265  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.069698  672737 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:53:19.074565  672737 start.go:128] duration metric: took 7.257430988s to createHost
	I1019 12:53:19.074635  672737 start.go:83] releasing machines lock for "newest-cni-190708", held for 7.257591431s
	I1019 12:53:19.074702  672737 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:19.092846  672737 ssh_runner.go:195] Run: cat /version.json
	I1019 12:53:19.092896  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.092920  672737 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:53:19.092980  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:19.112049  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.112296  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:19.259186  672737 ssh_runner.go:195] Run: systemctl --version
	I1019 12:53:19.265848  672737 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:53:19.301474  672737 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:53:19.306225  672737 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:53:19.306297  672737 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:53:19.331979  672737 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
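The find/mv step above sidelines any pre-existing bridge or podman CNI configs so the kindnet config chosen later takes precedence. A Go sketch of the equivalent rename pass, using the same .mk_disabled suffix convention as the log but running locally instead of over ssh:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already disabled.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", src)
		}
	}
}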
	I1019 12:53:19.332008  672737 start.go:495] detecting cgroup driver to use...
	I1019 12:53:19.332048  672737 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:53:19.332111  672737 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:53:19.348084  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:53:19.360773  672737 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:53:19.360844  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:53:19.377948  672737 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:53:19.395822  672737 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:53:19.484678  672737 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:53:19.575544  672737 docker.go:234] disabling docker service ...
	I1019 12:53:19.575618  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:53:19.595378  672737 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:53:19.608092  672737 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:53:19.693958  672737 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:53:19.776371  672737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:53:19.789375  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:53:19.804627  672737 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:53:19.804704  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.814787  672737 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:53:19.814837  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.823551  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.832169  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.840784  672737 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:53:19.848724  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.857100  672737 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.870352  672737 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:53:19.878731  672737 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:53:19.886348  672737 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:53:19.893759  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:19.973321  672737 ssh_runner.go:195] Run: sudo systemctl restart crio
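The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A Go sketch of the first two rewrites using regexp, mirroring the sed expressions from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}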
	I1019 12:53:20.077881  672737 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:53:20.077979  672737 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:53:20.082037  672737 start.go:563] Will wait 60s for crictl version
	I1019 12:53:20.082093  672737 ssh_runner.go:195] Run: which crictl
	I1019 12:53:20.085569  672737 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:53:20.109837  672737 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:53:20.109920  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.138350  672737 ssh_runner.go:195] Run: crio --version
	I1019 12:53:20.168482  672737 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:53:20.169863  672737 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:53:20.188025  672737 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:53:20.192265  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.203815  672737 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:53:20.205047  672737 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:53:20.205149  672737 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:20.205199  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.236514  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.236536  672737 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:53:20.236581  672737 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:53:20.262051  672737 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:53:20.262073  672737 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:53:20.262080  672737 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:53:20.262171  672737 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:53:20.262247  672737 ssh_runner.go:195] Run: crio config
	I1019 12:53:20.309916  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:20.309950  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:20.309973  672737 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:53:20.310003  672737 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:53:20.310145  672737 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
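The kubeadm config above is rendered from minikube's option structs before being copied to /var/tmp/minikube/kubeadm.yaml.new. A sketch of that general approach with text/template follows; the struct and field names are invented for illustration, and only a fragment of the real config is rendered.

package main

import (
	"os"
	"text/template"
)

// opts is a stand-in for minikube's kubeadm option struct; field names
// here are assumptions for the sketch, not minikube's real types.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.94.2",
		BindPort:         8443,
		NodeName:         "newest-cni-190708",
		PodSubnet:        "10.42.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}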
	I1019 12:53:20.310214  672737 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:53:20.318657  672737 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:53:20.318731  672737 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:53:20.326554  672737 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:53:20.339030  672737 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:53:20.354155  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1019 12:53:20.366696  672737 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:53:20.370356  672737 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:53:20.380455  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:20.458942  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:20.485015  672737 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:53:20.485043  672737 certs.go:195] generating shared ca certs ...
	I1019 12:53:20.485070  672737 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.485221  672737 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:53:20.485264  672737 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:53:20.485275  672737 certs.go:257] generating profile certs ...
	I1019 12:53:20.485328  672737 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:53:20.485348  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt with IP's: []
	I1019 12:53:20.585551  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt ...
	I1019 12:53:20.585580  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.crt: {Name:mk5251db26990dc5997b9e5853758832f57cf196 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585769  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key ...
	I1019 12:53:20.585781  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key: {Name:mk05802bac0f3e5b3a8b334617d45fe07eee0068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.585867  672737 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:53:20.585883  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1019 12:53:20.684366  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd ...
	I1019 12:53:20.684395  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd: {Name:mk395ac2723daa6eac9a1a5448aa56dcc3dae795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684562  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd ...
	I1019 12:53:20.684576  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd: {Name:mk1d126d0c5513551abbae58673dc597e26ffe4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.684650  672737 certs.go:382] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt
	I1019 12:53:20.684722  672737 certs.go:386] copying /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd -> /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key
	I1019 12:53:20.684776  672737 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:53:20.684791  672737 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt with IP's: []
	I1019 12:53:20.821306  672737 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt ...
	I1019 12:53:20.821336  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt: {Name:mkf04fb8bbf161179ae86ba91d4a80f873fae21e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821524  672737 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key ...
	I1019 12:53:20.821544  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key: {Name:mk22ac123e8932e8db98bd277997b637ec873079 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:20.821743  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:53:20.821779  672737 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:53:20.821789  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:53:20.821812  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:53:20.821834  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:53:20.821860  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:53:20.821901  672737 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:53:20.822529  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:53:20.843244  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:53:20.860464  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:53:20.877640  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:53:20.895480  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:53:20.912797  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:53:20.929757  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:53:20.947521  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:53:20.964869  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:53:20.984248  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:53:21.003061  672737 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:53:21.020532  672737 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:53:21.033435  672737 ssh_runner.go:195] Run: openssl version
	I1019 12:53:21.040056  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:53:21.049001  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052716  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.052781  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:53:21.088149  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:53:21.097154  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:53:21.105495  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109154  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.109216  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:53:21.144296  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:53:21.153347  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:53:21.161940  672737 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165605  672737 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.165655  672737 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:53:21.199345  672737 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
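Each certificate above is linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem), the same layout c_rehash produces. A Go sketch of one hash-and-symlink step, shelling out to openssl exactly as the logged commands do:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout prints the subject hash used as the link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: drop any stale link, then point it at the cert.
	os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}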
	I1019 12:53:21.208215  672737 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:53:21.212056  672737 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 12:53:21.212119  672737 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:21.212215  672737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:53:21.212265  672737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:53:21.240234  672737 cri.go:89] found id: ""
	I1019 12:53:21.240301  672737 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:53:21.248582  672737 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 12:53:21.256728  672737 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1019 12:53:21.256801  672737 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 12:53:21.265096  672737 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 12:53:21.265135  672737 kubeadm.go:157] found existing configuration files:
	
	I1019 12:53:21.265192  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 12:53:21.273544  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 12:53:21.273612  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 12:53:21.282090  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 12:53:21.290396  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 12:53:21.290490  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 12:53:21.300201  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.308252  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 12:53:21.308306  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 12:53:21.315749  672737 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 12:53:21.323167  672737 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 12:53:21.323239  672737 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 12:53:21.330315  672737 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1019 12:53:21.369107  672737 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1019 12:53:21.369180  672737 kubeadm.go:318] [preflight] Running pre-flight checks
	I1019 12:53:21.390319  672737 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1019 12:53:21.390379  672737 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1019 12:53:21.390409  672737 kubeadm.go:318] OS: Linux
	I1019 12:53:21.390480  672737 kubeadm.go:318] CGROUPS_CPU: enabled
	I1019 12:53:21.390540  672737 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1019 12:53:21.390652  672737 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1019 12:53:21.390735  672737 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1019 12:53:21.390790  672737 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1019 12:53:21.390890  672737 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1019 12:53:21.390973  672737 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1019 12:53:21.391026  672737 kubeadm.go:318] CGROUPS_IO: enabled
	I1019 12:53:21.449690  672737 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 12:53:21.449859  672737 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 12:53:21.449988  672737 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 12:53:21.458017  672737 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 12:53:21.459979  672737 out.go:252]   - Generating certificates and keys ...
	I1019 12:53:21.460084  672737 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1019 12:53:21.460184  672737 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1019 12:53:17.646821  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.647689  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:19.795394  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:21.795584  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	W1019 12:53:23.796166  663517 pod_ready.go:104] pod "coredns-66bc5c9577-bw9l4" is not "Ready", error: <nil>
	I1019 12:53:21.782609  672737 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 12:53:22.004817  672737 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1019 12:53:22.154911  672737 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1019 12:53:22.730145  672737 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1019 12:53:22.932723  672737 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1019 12:53:22.932904  672737 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.243959  672737 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1019 12:53:23.244120  672737 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-190708] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1019 12:53:23.410854  672737 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 12:53:23.472366  672737 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 12:53:23.643869  672737 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1019 12:53:23.644033  672737 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 12:53:23.711987  672737 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 12:53:24.037993  672737 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 12:53:24.501726  672737 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 12:53:24.744523  672737 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 12:53:24.859147  672737 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 12:53:24.859688  672737 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 12:53:24.863264  672737 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 12:53:24.864642  672737 out.go:252]   - Booting up control plane ...
	I1019 12:53:24.864730  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 12:53:24.864796  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 12:53:24.865498  672737 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 12:53:24.879079  672737 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 12:53:24.879207  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 12:53:24.886821  672737 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 12:53:24.887101  672737 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 12:53:24.887199  672737 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1019 12:53:24.983491  672737 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 12:53:24.983708  672737 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 12:53:25.984614  672737 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001307224s
	I1019 12:53:25.988599  672737 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 12:53:25.988724  672737 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1019 12:53:25.988848  672737 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 12:53:25.988960  672737 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1019 12:53:22.146944  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:24.647501  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:26.295683  663517 pod_ready.go:94] pod "coredns-66bc5c9577-bw9l4" is "Ready"
	I1019 12:53:26.295713  663517 pod_ready.go:86] duration metric: took 31.505627238s for pod "coredns-66bc5c9577-bw9l4" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.297917  663517 pod_ready.go:83] waiting for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.301953  663517 pod_ready.go:94] pod "etcd-embed-certs-123864" is "Ready"
	I1019 12:53:26.301978  663517 pod_ready.go:86] duration metric: took 4.035262ms for pod "etcd-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.304112  663517 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.308120  663517 pod_ready.go:94] pod "kube-apiserver-embed-certs-123864" is "Ready"
	I1019 12:53:26.308144  663517 pod_ready.go:86] duration metric: took 4.009533ms for pod "kube-apiserver-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.309999  663517 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.494192  663517 pod_ready.go:94] pod "kube-controller-manager-embed-certs-123864" is "Ready"
	I1019 12:53:26.494219  663517 pod_ready.go:86] duration metric: took 184.199033ms for pod "kube-controller-manager-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:26.694487  663517 pod_ready.go:83] waiting for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.094397  663517 pod_ready.go:94] pod "kube-proxy-gvrcz" is "Ready"
	I1019 12:53:27.094457  663517 pod_ready.go:86] duration metric: took 399.93585ms for pod "kube-proxy-gvrcz" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.293675  663517 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694119  663517 pod_ready.go:94] pod "kube-scheduler-embed-certs-123864" is "Ready"
	I1019 12:53:27.694146  663517 pod_ready.go:86] duration metric: took 400.447048ms for pod "kube-scheduler-embed-certs-123864" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:27.694158  663517 pod_ready.go:40] duration metric: took 32.912525222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:27.746279  663517 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:27.748237  663517 out.go:179] * Done! kubectl is now configured to use "embed-certs-123864" cluster and "default" namespace by default
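
The per-pod waits above poll each kube-system pod's Ready condition in turn. A roughly equivalent one-off check, assuming minikube's usual context naming (context name = profile name), is sketched below:

    $ kubectl --context embed-certs-123864 -n kube-system \
        wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
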
	I1019 12:53:27.518915  672737 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.530228054s
	I1019 12:53:28.053793  672737 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.061152071s
	I1019 12:53:29.990081  672737 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001429284s
	I1019 12:53:30.001867  672737 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 12:53:30.014037  672737 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 12:53:30.024140  672737 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 12:53:30.024456  672737 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-190708 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 12:53:30.033264  672737 kubeadm.go:318] [bootstrap-token] Using token: gtkds1.9e0h8pmw5r5mqwja
	I1019 12:53:30.034587  672737 out.go:252]   - Configuring RBAC rules ...
	I1019 12:53:30.034754  672737 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 12:53:30.038773  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 12:53:30.045039  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 12:53:30.049009  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 12:53:30.052044  672737 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 12:53:30.054665  672737 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 12:53:30.397490  672737 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 12:53:30.827821  672737 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1019 12:53:31.396481  672737 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1019 12:53:31.397310  672737 kubeadm.go:318] 
	I1019 12:53:31.397402  672737 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1019 12:53:31.397413  672737 kubeadm.go:318] 
	I1019 12:53:31.397551  672737 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1019 12:53:31.397565  672737 kubeadm.go:318] 
	I1019 12:53:31.397596  672737 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1019 12:53:31.397650  672737 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 12:53:31.397698  672737 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 12:53:31.397705  672737 kubeadm.go:318] 
	I1019 12:53:31.397749  672737 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1019 12:53:31.397755  672737 kubeadm.go:318] 
	I1019 12:53:31.397794  672737 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 12:53:31.397800  672737 kubeadm.go:318] 
	I1019 12:53:31.397861  672737 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1019 12:53:31.397953  672737 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 12:53:31.398040  672737 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 12:53:31.398051  672737 kubeadm.go:318] 
	I1019 12:53:31.398140  672737 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 12:53:31.398207  672737 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1019 12:53:31.398213  672737 kubeadm.go:318] 
	I1019 12:53:31.398292  672737 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398378  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 \
	I1019 12:53:31.398399  672737 kubeadm.go:318] 	--control-plane 
	I1019 12:53:31.398405  672737 kubeadm.go:318] 
	I1019 12:53:31.398523  672737 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1019 12:53:31.398534  672737 kubeadm.go:318] 
	I1019 12:53:31.398627  672737 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token gtkds1.9e0h8pmw5r5mqwja \
	I1019 12:53:31.398790  672737 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:cd3cedbdf6f2c7985466751bd0aead39c45709d322b3cd2a3b700fa4ff682933 
	I1019 12:53:31.401824  672737 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1019 12:53:31.402002  672737 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
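
Both join commands above embed a --discovery-token-ca-cert-hash. If that hash is lost, the standard kubeadm recipe recomputes it from the cluster CA; the path below assumes kubeadm's default PKI layout (minikube keeps its kubeadm certificates under /var/lib/minikube/certs, so adjust accordingly):

    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | sha256sum | cut -d' ' -f1
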
	I1019 12:53:31.402023  672737 cni.go:84] Creating CNI manager for ""
	I1019 12:53:31.402032  672737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:31.403960  672737 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1019 12:53:31.405314  672737 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1019 12:53:31.410474  672737 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1019 12:53:31.410496  672737 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1019 12:53:31.424273  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1019 12:53:27.147074  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:29.645647  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	W1019 12:53:31.646857  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:31.641912  672737 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 12:53:31.642008  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:31.642011  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-190708 minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99 minikube.k8s.io/name=newest-cni-190708 minikube.k8s.io/primary=true
	I1019 12:53:31.652529  672737 ops.go:34] apiserver oom_adj: -16
	I1019 12:53:31.718996  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.219629  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:32.719834  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.219813  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:33.719692  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.219076  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:34.719433  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.219917  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.719034  672737 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 12:53:35.785029  672737 kubeadm.go:1113] duration metric: took 4.143080971s to wait for elevateKubeSystemPrivileges
	I1019 12:53:35.785068  672737 kubeadm.go:402] duration metric: took 14.57295181s to StartCluster
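
The burst of `kubectl get sa default` calls above is a readiness poll: minikube retries until the default service account exists before finishing the privilege elevation. A minimal shell equivalent of that poll, assuming kubectl already points at the new cluster:

    $ until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done
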
	I1019 12:53:35.785101  672737 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.785174  672737 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:35.787497  672737 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:53:35.787794  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 12:53:35.787820  672737 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:53:35.787897  672737 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:53:35.787993  672737 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:53:35.788017  672737 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	I1019 12:53:35.788020  672737 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	I1019 12:53:35.788053  672737 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:53:35.788062  672737 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:35.788057  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.788500  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.788555  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.789512  672737 out.go:179] * Verifying Kubernetes components...
	I1019 12:53:35.791378  672737 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:53:35.812380  672737 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1019 12:53:33.646988  664256 pod_ready.go:104] pod "coredns-66bc5c9577-hftjp" is not "Ready", error: <nil>
	I1019 12:53:34.648076  664256 pod_ready.go:94] pod "coredns-66bc5c9577-hftjp" is "Ready"
	I1019 12:53:34.648104  664256 pod_ready.go:86] duration metric: took 36.507165259s for pod "coredns-66bc5c9577-hftjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.650741  664256 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.654523  664256 pod_ready.go:94] pod "etcd-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.654547  664256 pod_ready.go:86] duration metric: took 3.785206ms for pod "etcd-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.656429  664256 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.660685  664256 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.660712  664256 pod_ready.go:86] duration metric: took 4.258461ms for pod "kube-apiserver-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.662348  664256 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:34.844857  664256 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:34.844886  664256 pod_ready.go:86] duration metric: took 182.521582ms for pod "kube-controller-manager-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.044783  664256 pod_ready.go:83] waiting for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.445005  664256 pod_ready.go:94] pod "kube-proxy-cjxjt" is "Ready"
	I1019 12:53:35.445031  664256 pod_ready.go:86] duration metric: took 400.222332ms for pod "kube-proxy-cjxjt" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:35.645060  664256 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045246  664256 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-999693" is "Ready"
	I1019 12:53:36.045282  664256 pod_ready.go:86] duration metric: took 400.190569ms for pod "kube-scheduler-default-k8s-diff-port-999693" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 12:53:36.045298  664256 pod_ready.go:40] duration metric: took 37.908676389s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 12:53:36.105764  664256 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.108299  664256 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-999693" cluster and "default" namespace by default
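
The same per-pod polling drives these waits; a one-off query of the exact condition being checked, again assuming context name = profile name, would be:

    $ kubectl --context default-k8s-diff-port-999693 -n kube-system \
        get pod coredns-66bc5c9577-hftjp \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
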
	I1019 12:53:35.813186  672737 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	I1019 12:53:35.813237  672737 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:53:35.813735  672737 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:35.815209  672737 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.815225  672737 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:53:35.815282  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.843451  672737 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:35.843479  672737 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:53:35.843567  672737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:35.844218  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.868726  672737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33500 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:53:35.877614  672737 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 12:53:35.929249  672737 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:53:35.955142  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:53:35.988275  672737 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:53:36.052147  672737 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
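
The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (it also inserts a `log` directive before `errors`). Reconstructed from the sed expressions, the injected stanza is:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }
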
	I1019 12:53:36.053790  672737 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:53:36.053847  672737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:53:36.305744  672737 api_server.go:72] duration metric: took 517.881771ms to wait for apiserver process to appear ...
	I1019 12:53:36.305769  672737 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:53:36.305790  672737 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:53:36.310834  672737 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:53:36.311767  672737 api_server.go:141] control plane version: v1.34.1
	I1019 12:53:36.311798  672737 api_server.go:131] duration metric: took 6.020737ms to wait for apiserver health ...
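
The health wait above probes /healthz over HTTPS directly. A manual spot-check against this cluster, with -k because the serving certificate is signed by minikube's own CA:

    $ curl -k https://192.168.94.2:8443/healthz
    ok
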
	I1019 12:53:36.311809  672737 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:53:36.313872  672737 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 12:53:36.314880  672737 system_pods.go:59] 8 kube-system pods found
	I1019 12:53:36.314917  672737 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314933  672737 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:53:36.314945  672737 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:53:36.314955  672737 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:53:36.314961  672737 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:53:36.314969  672737 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:53:36.314976  672737 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:53:36.314981  672737 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:53:36.314992  672737 system_pods.go:74] duration metric: took 3.173905ms to wait for pod list to return data ...
	I1019 12:53:36.315000  672737 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:53:36.315055  672737 addons.go:514] duration metric: took 527.155312ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 12:53:36.317196  672737 default_sa.go:45] found service account: "default"
	I1019 12:53:36.317218  672737 default_sa.go:55] duration metric: took 2.212206ms for default service account to be created ...
	I1019 12:53:36.317230  672737 kubeadm.go:586] duration metric: took 529.375092ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:36.317251  672737 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:53:36.319523  672737 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:53:36.319545  672737 node_conditions.go:123] node cpu capacity is 8
	I1019 12:53:36.319557  672737 node_conditions.go:105] duration metric: took 2.300039ms to run NodePressure ...
	I1019 12:53:36.319567  672737 start.go:241] waiting for startup goroutines ...
	I1019 12:53:36.557265  672737 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-190708" context rescaled to 1 replicas
	I1019 12:53:36.557311  672737 start.go:246] waiting for cluster config update ...
	I1019 12:53:36.557328  672737 start.go:255] writing updated cluster config ...
	I1019 12:53:36.557703  672737 ssh_runner.go:195] Run: rm -f paused
	I1019 12:53:36.609706  672737 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:53:36.612691  672737 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.88483956Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.892886267Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 19 12:53:07 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:07.892921327Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.932329695Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8b3e74ed-4f85-482c-87f4-a76ef9aa9099 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.935681047Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b396f35a-7120-4a13-a27c-32c5b73a6685 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.938874512Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=f22b1c94-815e-4447-9541-b6608a341051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.94067125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.947666157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.948284651Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.9766739Z" level=info msg="Created container fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=f22b1c94-815e-4447-9541-b6608a341051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.977301467Z" level=info msg="Starting container: fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498" id=53e54053-4a21-4a06-afce-ece0565d6426 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:21 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:21.979283846Z" level=info msg="Started container" PID=1741 containerID=fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper id=53e54053-4a21-4a06-afce-ece0565d6426 name=/runtime.v1.RuntimeService/StartContainer sandboxID=948ed0e57e10af1077e46c4f3013445ab8657025398b6c9deb2ecca75846eedd
	Oct 19 12:53:22 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:22.05253305Z" level=info msg="Removing container: 9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4" id=b4478149-38c6-4304-8ca5-bc8587910e08 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:22 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:22.063013739Z" level=info msg="Removed container 9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp/dashboard-metrics-scraper" id=b4478149-38c6-4304-8ca5-bc8587910e08 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.071441936Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2066969c-69c0-4b5e-b886-2de43b3264b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.072654733Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c1a1bb9c-ecdf-46d6-b1ca-4940e286f6f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.074546867Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=681e82bf-0cc6-4a83-8f5e-954dc4c7b7c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.074969232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079694523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079900341Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/009fbe1bc0de7f6b617f455576b723f8436c8152497a23fdc6f0ad8621b2b009/merged/etc/passwd: no such file or directory"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.079931687Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/009fbe1bc0de7f6b617f455576b723f8436c8152497a23fdc6f0ad8621b2b009/merged/etc/group: no such file or directory"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.080201445Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.116665943Z" level=info msg="Created container 3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f: kube-system/storage-provisioner/storage-provisioner" id=681e82bf-0cc6-4a83-8f5e-954dc4c7b7c1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.117391488Z" level=info msg="Starting container: 3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f" id=cec976e7-ac10-4e69-8268-4c3466fa0e21 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:53:28 default-k8s-diff-port-999693 crio[560]: time="2025-10-19T12:53:28.12097517Z" level=info msg="Started container" PID=1755 containerID=3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f description=kube-system/storage-provisioner/storage-provisioner id=cec976e7-ac10-4e69-8268-4c3466fa0e21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bf31ff487ae9f3e44a9997e108b3767b012e2395e8dc253f5e1e71f2a0fd1473
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	3958f67da7990       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   bf31ff487ae9f       storage-provisioner                                    kube-system
	fdc334ceb1fdf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   948ed0e57e10a       dashboard-metrics-scraper-6ffb444bf9-668bp             kubernetes-dashboard
	1cd8bcfb5c309       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   b3cd80fdf703c       kubernetes-dashboard-855c9754f9-bv5k2                  kubernetes-dashboard
	78d2ca731e98b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   6f7a0a00feef2       coredns-66bc5c9577-hftjp                               kube-system
	b64ed35b37aa5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   6a7e6373a7bc2       busybox                                                default
	81423f1b546a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   bf31ff487ae9f       storage-provisioner                                    kube-system
	1a511f79ffb76       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   c75f8116f699e       kindnet-79bv6                                          kube-system
	dd65c0ffcffff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   44d3ace69f67c       kube-proxy-cjxjt                                       kube-system
	7387a9f9039b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   10d5a8cf5f66b       kube-scheduler-default-k8s-diff-port-999693            kube-system
	dc93d8bd2fb47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   6b4bb05fe90f7       kube-apiserver-default-k8s-diff-port-999693            kube-system
	386f63ea17ece       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   9e55a87730ee1       kube-controller-manager-default-k8s-diff-port-999693   kube-system
	3d2737d35156d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   b8cc35ef027bd       etcd-default-k8s-diff-port-999693                      kube-system
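
This table is the node's CRI view of containers (crictl-style output collected by minikube logs). Something like the following should reproduce it against the running profile:

    $ minikube -p default-k8s-diff-port-999693 ssh -- sudo crictl ps -a
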
	
	
	==> coredns [78d2ca731e98befb02938c95d004c6de4e1bb290061976cb23bcd09a6b0139e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58028 - 34348 "HINFO IN 773624347461218208.5993015123048664836. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.10228723s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
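
The dial tcp 10.96.0.1:443 i/o timeouts are CoreDNS's informers failing to reach the kubernetes Service VIP while the restarted node's networking converges; the WARNING about starting with an unsynced API is the fallback path, and the errors clear once the API is reachable. 10.96.0.1 is the default ClusterIP of the kubernetes Service, which can be confirmed with:

    $ kubectl get svc kubernetes
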
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-999693
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-999693
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=default-k8s-diff-port-999693
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_52_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:51:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-999693
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:53:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:51:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 12:53:27 +0000   Sun, 19 Oct 2025 12:52:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-999693
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                dba8bd18-ed7d-4c69-88aa-2713b680a799
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-hftjp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-999693                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-79bv6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-999693             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-999693    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-cjxjt                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-999693             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-668bp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bv5k2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-999693 event: Registered Node default-k8s-diff-port-999693 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-999693 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-999693 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-999693 event: Registered Node default-k8s-diff-port-999693 in Controller
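
The node summary above is plain `kubectl describe node` output captured by the log collector; to regenerate it for this profile:

    $ kubectl --context default-k8s-diff-port-999693 \
        describe node default-k8s-diff-port-999693
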
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [3d2737d35156d50ddf2521cf937a27d4a3882183759b5bedf15ae21799bc69b0] <==
	{"level":"warn","ts":"2025-10-19T12:52:55.808955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.818906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.826933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.834608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.841655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.849382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.857804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.866173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.873560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.881201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.890033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.897105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.904997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.913887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.920298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.927436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.935296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.945561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.957712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.972291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.978925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.989709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:55.998135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:56.005635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:52:56.062065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:53:52 up  2:36,  0 user,  load average: 2.90, 4.35, 3.03
	Linux default-k8s-diff-port-999693 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a511f79ffb7681fd929b4894c4f59a2a44ed69f557e9e40d7d67bdedd66fb6d] <==
	I1019 12:52:57.584708       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:52:57.585173       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1019 12:52:57.585363       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:52:57.585381       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:52:57.585407       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:52:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:52:57.857795       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:52:57.857860       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:52:57.857871       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:52:58.057565       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:52:58.358881       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:52:58.358984       1 metrics.go:72] Registering metrics
	I1019 12:52:58.359109       1 controller.go:711] "Syncing nftables rules"
	I1019 12:53:07.857906       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:07.857981       1 main.go:301] handling current node
	I1019 12:53:17.860496       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:17.860534       1 main.go:301] handling current node
	I1019 12:53:27.857553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:27.857594       1 main.go:301] handling current node
	I1019 12:53:37.859511       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:37.859568       1 main.go:301] handling current node
	I1019 12:53:47.866646       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1019 12:53:47.866682       1 main.go:301] handling current node
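
The "nri plugin exited" line early in this log only means the runtime exposes no NRI socket; kindnet carries on without it, as the node-handling lines afterwards show. Whether the socket exists can be checked with:

    $ minikube -p default-k8s-diff-port-999693 ssh -- ls -l /var/run/nri/nri.sock
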
	
	
	==> kube-apiserver [dc93d8bd2fb474180164b7ca4cdad0cbca1bb12056f2ec0109f0fdd3eaff8e74] <==
	I1019 12:52:56.548751       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 12:52:56.548789       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:52:56.548847       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 12:52:56.548943       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:52:56.549236       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:52:56.549271       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:52:56.549702       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:52:56.549743       1 cache.go:39] Caches are synced for autoregister controller
	E1019 12:52:56.553181       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 12:52:56.554296       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:52:56.576833       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 12:52:56.585608       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1019 12:52:56.585650       1 policy_source.go:240] refreshing policies
	I1019 12:52:56.675306       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:52:56.839307       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:52:56.867764       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:52:56.888522       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:52:56.897265       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:52:56.906743       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:52:56.956235       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.5.64"}
	I1019 12:52:56.969748       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.216.230"}
	I1019 12:52:57.454562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:53:00.223463       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 12:53:00.323518       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:53:00.374380       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [386f63ea17ece706be504558369a24b364237cf65e614304f2e3a200660b929a] <==
	I1019 12:52:59.874698       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 12:52:59.875000       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:52:59.875051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:52:59.875155       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:52:59.875189       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:52:59.875218       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:52:59.876058       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:59.876914       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:52:59.877761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:52:59.878224       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1019 12:52:59.880770       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:52:59.883047       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 12:52:59.883188       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 12:52:59.883351       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-999693"
	I1019 12:52:59.883415       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 12:52:59.884550       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 12:52:59.885556       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:59.887473       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1019 12:52:59.888932       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1019 12:52:59.890545       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1019 12:52:59.892978       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 12:52:59.899694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1019 12:52:59.904970       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:52:59.905031       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:52:59.905045       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [dd65c0ffcffffaa62043de3c54111cd1ddf6293c650cbd534ce5438d3ee3e784] <==
	I1019 12:52:57.369275       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:52:57.438176       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:52:57.538526       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:52:57.538571       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1019 12:52:57.538731       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:52:57.566183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:52:57.566247       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:52:57.573696       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:52:57.574216       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:52:57.574575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:57.576999       1 config.go:200] "Starting service config controller"
	I1019 12:52:57.577805       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:52:57.577882       1 config.go:309] "Starting node config controller"
	I1019 12:52:57.577896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:52:57.577903       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:52:57.577741       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:52:57.577994       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:52:57.577754       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:52:57.578022       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:52:57.678074       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 12:52:57.678111       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:52:57.678161       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7387a9f9039b6043f8b791c29478a2e313a9c1d07804c55f3bd42e18a02230e4] <==
	I1019 12:52:55.337561       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:52:56.522482       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:52:56.522504       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:52:56.527648       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:52:56.527699       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:52:56.527705       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.527730       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.527900       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:56.528209       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:52:56.528480       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:52:56.528568       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:52:56.628077       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:52:56.628131       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:52:56.628453       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:53:00 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:00.598122     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7z87\" (UniqueName: \"kubernetes.io/projected/ffe96798-7c36-44e9-9226-0fea7d9cba29-kube-api-access-w7z87\") pod \"kubernetes-dashboard-855c9754f9-bv5k2\" (UID: \"ffe96798-7c36-44e9-9226-0fea7d9cba29\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bv5k2"
	Oct 19 12:53:00 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:00.598153     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfvmm\" (UniqueName: \"kubernetes.io/projected/be6e9801-108a-4894-958e-283c60be7560-kube-api-access-pfvmm\") pod \"dashboard-metrics-scraper-6ffb444bf9-668bp\" (UID: \"be6e9801-108a-4894-958e-283c60be7560\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp"
	Oct 19 12:53:03 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:03.994230     716 scope.go:117] "RemoveContainer" containerID="a8d742844e3efb843d85972ded1da36c6bbb0cca4b7c2fc0ed2d1736642130f5"
	Oct 19 12:53:04 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:04.385388     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:05.000710     716 scope.go:117] "RemoveContainer" containerID="a8d742844e3efb843d85972ded1da36c6bbb0cca4b7c2fc0ed2d1736642130f5"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:05.001268     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:05 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:05.001583     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:06 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:06.005277     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:06 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:06.005903     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:07 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:07.008337     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:07 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:07.008589     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:08 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:08.023418     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-bv5k2" podStartSLOduration=1.371320181 podStartE2EDuration="8.023393643s" podCreationTimestamp="2025-10-19 12:53:00 +0000 UTC" firstStartedPulling="2025-10-19 12:53:00.834327714 +0000 UTC m=+7.006794712" lastFinishedPulling="2025-10-19 12:53:07.48640116 +0000 UTC m=+13.658868174" observedRunningTime="2025-10-19 12:53:08.023207883 +0000 UTC m=+14.195674898" watchObservedRunningTime="2025-10-19 12:53:08.023393643 +0000 UTC m=+14.195860658"
	Oct 19 12:53:21 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:21.931857     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:22.051119     716 scope.go:117] "RemoveContainer" containerID="9c6ecd04d755af0f99c11bcd67e3ebd536a4f152bf6791e0037db6cc129fc8f4"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:22.051332     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:22 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:22.051546     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:25 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:25.760817     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:25 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:25.761044     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:28 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:28.070963     716 scope.go:117] "RemoveContainer" containerID="81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375"
	Oct 19 12:53:38 default-k8s-diff-port-999693 kubelet[716]: I1019 12:53:38.931623     716 scope.go:117] "RemoveContainer" containerID="fdc334ceb1fdf443c914960ec607ffd6394bcdeb6ef5582290175450e8359498"
	Oct 19 12:53:38 default-k8s-diff-port-999693 kubelet[716]: E1019 12:53:38.931870     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-668bp_kubernetes-dashboard(be6e9801-108a-4894-958e-283c60be7560)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-668bp" podUID="be6e9801-108a-4894-958e-283c60be7560"
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 19 12:53:48 default-k8s-diff-port-999693 systemd[1]: kubelet.service: Consumed 1.710s CPU time.
	
	
	==> kubernetes-dashboard [1cd8bcfb5c309260593239de52b34e22550c164bb9abd93b219cb9e1a5bf0fbe] <==
	2025/10/19 12:53:07 Starting overwatch
	2025/10/19 12:53:07 Using namespace: kubernetes-dashboard
	2025/10/19 12:53:07 Using in-cluster config to connect to apiserver
	2025/10/19 12:53:07 Using secret token for csrf signing
	2025/10/19 12:53:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/19 12:53:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/19 12:53:07 Successful initial request to the apiserver, version: v1.34.1
	2025/10/19 12:53:07 Generating JWE encryption key
	2025/10/19 12:53:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/19 12:53:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/19 12:53:07 Initializing JWE encryption key from synchronized object
	2025/10/19 12:53:07 Creating in-cluster Sidecar client
	2025/10/19 12:53:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/19 12:53:07 Serving insecurely on HTTP port: 9090
	2025/10/19 12:53:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3958f67da799089d5c30b63ec7f53c85ee3a7cdf455396407624ee16e946961f] <==
	I1019 12:53:28.133221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1019 12:53:28.141366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1019 12:53:28.141455       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1019 12:53:28.143771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:31.599561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:35.861371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:39.459339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:42.512960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.535755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.540108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:45.540288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1019 12:53:45.540461       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042!
	I1019 12:53:45.540469       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"94ffe2ba-d9f2-4be7-afb9-f7f386e949ce", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042 became leader
	W1019 12:53:45.542738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:45.547448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1019 12:53:45.641509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-999693_363feff1-c085-4b74-b573-caf2ed60c042!
	W1019 12:53:47.550707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:47.555409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:49.559665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:49.564856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:51.568730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 12:53:51.573354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [81423f1b546a04c25757a47a152f0daa3ca35543016899d310a2e1bdf2986375] <==
	I1019 12:52:57.340214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1019 12:53:27.348534       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
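The storage-provisioner log above takes the kube-system/k8s.io-minikube-hostpath lock through the legacy v1 Endpoints path, which is what produces the repeated "v1 Endpoints is deprecated in v1.33+" warnings. A minimal client-go sketch of the replacement those warnings point at, leader election backed by a coordination.k8s.io Lease (lock name and namespace taken from the log; the rest is illustrative, not minikube's code):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // lease holder identity (assumption)

		// Same lock name/namespace as the log, but a Lease instead of Endpoints.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting provisioner") },
				OnStoppedLeading: func() { log.Println("lost lease; stopping") },
			},
		})
	}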
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693: exit status 2 (313.271627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
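The --format argument here is a Go text/template evaluated against minikube's status struct, so stdout carries only the rendered field ("Running") while the exit code (2, "may be ok") separately encodes the aggregate cluster state. A tiny sketch of that template mechanism, with the struct fields assumed for illustration:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's status struct; field names assumed.
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}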
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.18s)
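For reference, the dashboard-metrics-scraper restarts in the kubelet log above follow the kubelet's CrashLoopBackOff schedule ("back-off 10s", then "back-off 20s"): the delay doubles per restart up to a ceiling. A sketch of that progression, assuming the usual kubelet defaults of a 10s initial delay and a 5m cap:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay, limit := 10*time.Second, 5*time.Minute // assumed kubelet defaults
		for i := 1; i <= 7; i++ {
			fmt.Printf("restart %d: back-off %s\n", i, delay)
			if delay *= 2; delay > limit {
				delay = limit
			}
		}
	}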

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-190708 --alsologtostderr -v=1
E1019 12:54:08.590096  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kindnet-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-190708 --alsologtostderr -v=1: exit status 80 (2.246634329s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-190708 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 12:54:07.400458  683373 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:54:07.400567  683373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:54:07.400575  683373 out.go:374] Setting ErrFile to fd 2...
	I1019 12:54:07.400579  683373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:54:07.400755  683373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:54:07.400975  683373 out.go:368] Setting JSON to false
	I1019 12:54:07.401019  683373 mustload.go:65] Loading cluster: newest-cni-190708
	I1019 12:54:07.401341  683373 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:54:07.401748  683373 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:07.421074  683373 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:07.421377  683373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:54:07.480530  683373 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-19 12:54:07.470204696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:54:07.481152  683373 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-190708 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1019 12:54:07.483590  683373 out.go:179] * Pausing node newest-cni-190708 ... 
	I1019 12:54:07.484644  683373 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:07.484907  683373 ssh_runner.go:195] Run: systemctl --version
	I1019 12:54:07.484963  683373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:07.502796  683373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:07.598109  683373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:54:07.609963  683373 pause.go:52] kubelet running: true
	I1019 12:54:07.610047  683373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:54:07.754068  683373 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:54:07.754168  683373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:54:07.820624  683373 cri.go:89] found id: "65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5"
	I1019 12:54:07.820652  683373 cri.go:89] found id: "4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d"
	I1019 12:54:07.820658  683373 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:07.820663  683373 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:07.820667  683373 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:07.820672  683373 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:07.820676  683373 cri.go:89] found id: ""
	I1019 12:54:07.820723  683373 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:54:07.832821  683373 retry.go:31] will retry after 210.648094ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:07Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:54:08.044322  683373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:54:08.057069  683373 pause.go:52] kubelet running: false
	I1019 12:54:08.057138  683373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:54:08.167786  683373 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:54:08.167883  683373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:54:08.232577  683373 cri.go:89] found id: "65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5"
	I1019 12:54:08.232604  683373 cri.go:89] found id: "4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d"
	I1019 12:54:08.232610  683373 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:08.232614  683373 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:08.232618  683373 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:08.232621  683373 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:08.232625  683373 cri.go:89] found id: ""
	I1019 12:54:08.232674  683373 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:54:08.244271  683373 retry.go:31] will retry after 448.576154ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:08Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:54:08.693613  683373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:54:08.706317  683373 pause.go:52] kubelet running: false
	I1019 12:54:08.706375  683373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:54:08.818223  683373 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:54:08.818320  683373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:54:08.886173  683373 cri.go:89] found id: "65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5"
	I1019 12:54:08.886201  683373 cri.go:89] found id: "4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d"
	I1019 12:54:08.886207  683373 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:08.886212  683373 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:08.886215  683373 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:08.886218  683373 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:08.886221  683373 cri.go:89] found id: ""
	I1019 12:54:08.886270  683373 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:54:08.897957  683373 retry.go:31] will retry after 484.59344ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:08Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:54:09.383733  683373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:54:09.396487  683373 pause.go:52] kubelet running: false
	I1019 12:54:09.396549  683373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1019 12:54:09.508823  683373 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1019 12:54:09.508918  683373 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1019 12:54:09.576046  683373 cri.go:89] found id: "65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5"
	I1019 12:54:09.576067  683373 cri.go:89] found id: "4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d"
	I1019 12:54:09.576071  683373 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:09.576073  683373 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:09.576077  683373 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:09.576080  683373 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:09.576083  683373 cri.go:89] found id: ""
	I1019 12:54:09.576130  683373 ssh_runner.go:195] Run: sudo runc list -f json
	I1019 12:54:09.589732  683373 out.go:203] 
	W1019 12:54:09.591063  683373 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1019 12:54:09.591085  683373 out.go:285] * 
	* 
	W1019 12:54:09.595814  683373 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1019 12:54:09.597224  683373 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-190708 --alsologtostderr -v=1 failed: exit status 80
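The sequence in the stderr above is minikube's pause flow: disable the kubelet, list CRI containers in the target namespaces with crictl, then confirm runtime state with `runc list -f json`, retrying with short backoffs; every attempt dies on the missing /run/runc directory. A rough, hypothetical reconstruction of that loop around the same commands (plain os/exec wrappers, not the actual pause.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Step 1 in the log: stop the kubelet so containers stay put.
		if _, err := run("sudo", "systemctl", "disable", "--now", "kubelet"); err != nil {
			fmt.Println("disable kubelet:", err)
		}
		for attempt := 1; attempt <= 3; attempt++ {
			// Step 2: enumerate containers by pod namespace label, as cri.go does.
			ids, _ := run("sudo", "crictl", "ps", "-a", "--quiet",
				"--label", "io.kubernetes.pod.namespace=kube-system")
			fmt.Printf("attempt %d, kube-system containers:\n%s", attempt, ids)
			// Step 3: the call that fails above when /run/runc is absent.
			if out, err := run("sudo", "runc", "list", "-f", "json"); err != nil {
				fmt.Printf("runc list failed (%v): %s\n", err, out)
				time.Sleep(500 * time.Millisecond) // retry.go-style backoff
				continue
			}
			return
		}
	}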
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-190708
helpers_test.go:243: (dbg) docker inspect newest-cni-190708:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	        "Created": "2025-10-19T12:53:16.899890869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:53:57.004485226Z",
	            "FinishedAt": "2025-10-19T12:53:56.194573744Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hostname",
	        "HostsPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hosts",
	        "LogPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e-json.log",
	        "Name": "/newest-cni-190708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-190708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-190708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	                "LowerDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-190708",
	                "Source": "/var/lib/docker/volumes/newest-cni-190708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-190708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-190708",
	                "name.minikube.sigs.k8s.io": "newest-cni-190708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17458ca334845ef149f415646f1b3c3f72ede8a5c987ddfc2b246ad1cca7c212",
	            "SandboxKey": "/var/run/docker/netns/17458ca33484",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-190708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:99:01:b3:8a:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f377a8132f38263e0c4abe3d087c7fa64425e9bfe055ce9e280edbfae9e21983",
	                    "EndpointID": "17ca0641071acea38cecf2b45765ffb0de4ffec4fae926a941cd6864572544f6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-190708",
	                        "058030ae05d2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
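The Ports map in this inspect output is what the pause command consulted earlier via the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} to find the SSH endpoint (127.0.0.1:33505). A minimal sketch of the same lookup, reusing the container name from this report:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "newest-cni-190708").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33505 in this run
	}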
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708: exit status 2 (312.898865ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190708 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ default-k8s-diff-port-999693 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p default-k8s-diff-port-999693 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p default-k8s-diff-port-999693                                                                                                                                                                                                               │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p default-k8s-diff-port-999693                                                                                                                                                                                                               │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-190708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:54 UTC │
	│ image   │ newest-cni-190708 image list --format=json                                                                                                                                                                                                    │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:54 UTC │ 19 Oct 25 12:54 UTC │
	│ pause   │ -p newest-cni-190708 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:56.775608  681393 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:56.775717  681393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:56.775721  681393 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:56.775725  681393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:56.775932  681393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:56.776391  681393 out.go:368] Setting JSON to false
	I1019 12:53:56.777590  681393 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9385,"bootTime":1760869052,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:56.777708  681393 start.go:141] virtualization: kvm guest
	I1019 12:53:56.779545  681393 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:56.780810  681393 notify.go:220] Checking for updates...
	I1019 12:53:56.780833  681393 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:56.782177  681393 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:56.783534  681393 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:56.784688  681393 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:56.785720  681393 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:56.786991  681393 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:56.788412  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:56.788950  681393 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:56.812916  681393 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:56.813017  681393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:56.872433  681393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 12:53:56.862141078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:56.872541  681393 docker.go:318] overlay module found
	I1019 12:53:56.874130  681393 out.go:179] * Using the docker driver based on existing profile
	I1019 12:53:56.875367  681393 start.go:305] selected driver: docker
	I1019 12:53:56.875380  681393 start.go:925] validating driver "docker" against &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:56.875474  681393 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:56.876018  681393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:56.932120  681393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 12:53:56.922313914 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:56.932411  681393 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:56.932460  681393 cni.go:84] Creating CNI manager for ""
	I1019 12:53:56.932513  681393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:56.932548  681393 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:56.934449  681393 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:56.935660  681393 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:56.936901  681393 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:56.938144  681393 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:56.938193  681393 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:56.938216  681393 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:56.938250  681393 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:56.938337  681393 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:56.938354  681393 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:56.938505  681393 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:56.959123  681393 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:56.959143  681393 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:56.959163  681393 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:56.959196  681393 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:56.959267  681393 start.go:364] duration metric: took 47.819µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:56.959289  681393 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:53:56.959298  681393 fix.go:54] fixHost starting: 
	I1019 12:53:56.959560  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:56.977861  681393 fix.go:112] recreateIfNeeded on newest-cni-190708: state=Stopped err=<nil>
	W1019 12:53:56.977905  681393 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:53:56.979572  681393 out.go:252] * Restarting existing docker container for "newest-cni-190708" ...
	I1019 12:53:56.979657  681393 cli_runner.go:164] Run: docker start newest-cni-190708
	I1019 12:53:57.213228  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:57.231903  681393 kic.go:430] container "newest-cni-190708" state is running.
	I1019 12:53:57.232257  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:57.249491  681393 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:57.249703  681393 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:57.249775  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:57.268057  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:57.268291  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:53:57.268303  681393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:57.268868  681393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48476->127.0.0.1:33505: read: connection reset by peer
	I1019 12:54:00.401466  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:54:00.401494  681393 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:54:00.401560  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.419699  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:00.419917  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:00.419935  681393 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:54:00.560830  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:54:00.560907  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.579137  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:00.579368  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:00.579386  681393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:54:00.710049  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:54:00.710090  681393 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:54:00.710118  681393 ubuntu.go:190] setting up certificates
	I1019 12:54:00.710138  681393 provision.go:84] configureAuth start
	I1019 12:54:00.710193  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:54:00.728520  681393 provision.go:143] copyHostCerts
	I1019 12:54:00.728576  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:54:00.728592  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:54:00.728669  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:54:00.728778  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:54:00.728791  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:54:00.728820  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:54:00.728884  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:54:00.728892  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:54:00.728914  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:54:00.728965  681393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:54:00.840750  681393 provision.go:177] copyRemoteCerts
	I1019 12:54:00.840814  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:54:00.840860  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.858722  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:00.953617  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:54:00.970722  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:54:00.987192  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:54:01.004703  681393 provision.go:87] duration metric: took 294.550831ms to configureAuth
	I1019 12:54:01.004731  681393 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:54:01.004897  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:54:01.004996  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.022527  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:01.022779  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:01.022801  681393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:54:01.274564  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:54:01.274587  681393 machine.go:96] duration metric: took 4.024870686s to provisionDockerMachine
	I1019 12:54:01.274599  681393 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:54:01.274610  681393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:54:01.274672  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:54:01.274722  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.293127  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.388852  681393 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:54:01.392349  681393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:54:01.392383  681393 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:54:01.392396  681393 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:54:01.392460  681393 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:54:01.392539  681393 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:54:01.392635  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:54:01.400101  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:54:01.416948  681393 start.go:296] duration metric: took 142.318322ms for postStartSetup
	I1019 12:54:01.417033  681393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:54:01.417072  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.435357  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.527595  681393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:54:01.532162  681393 fix.go:56] duration metric: took 4.572855524s for fixHost
	I1019 12:54:01.532184  681393 start.go:83] releasing machines lock for "newest-cni-190708", held for 4.572906009s
	I1019 12:54:01.532238  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:54:01.550148  681393 ssh_runner.go:195] Run: cat /version.json
	I1019 12:54:01.550194  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.550266  681393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:54:01.550354  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.570055  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.570083  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.713249  681393 ssh_runner.go:195] Run: systemctl --version
	I1019 12:54:01.719964  681393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:54:01.754614  681393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:54:01.759320  681393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:54:01.759393  681393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:54:01.767169  681393 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1019 12:54:01.767197  681393 start.go:495] detecting cgroup driver to use...
	I1019 12:54:01.767227  681393 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:54:01.767268  681393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:54:01.781037  681393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:54:01.792481  681393 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:54:01.792537  681393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:54:01.806029  681393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:54:01.817933  681393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:54:01.894876  681393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:54:01.974621  681393 docker.go:234] disabling docker service ...
	I1019 12:54:01.974693  681393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:54:01.988467  681393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:54:02.000269  681393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:54:02.079762  681393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:54:02.159767  681393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:54:02.171908  681393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:54:02.185186  681393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:54:02.185253  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.193853  681393 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:54:02.193918  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.202248  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.210631  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.219032  681393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:54:02.226960  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.235483  681393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.243649  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
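	Taken together, the sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager, re-add conmon_cgroup, and re-insert the low-port sysctl. A minimal sketch of the fragment they produce, assuming the stock section layout (only the keys the log touches are shown; the TOML table headers are an assumption, not something the log records):
	
	  [crio.image]
	  # pause image pinned by the pause_image sed above
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  # aligned with the "systemd" cgroup driver detected on the host
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  # re-inserted so unprivileged pods can bind ports below 1024
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]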
	I1019 12:54:02.251981  681393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:54:02.259097  681393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:54:02.266239  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:02.346133  681393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 12:54:02.453130  681393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:54:02.453195  681393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:54:02.457135  681393 start.go:563] Will wait 60s for crictl version
	I1019 12:54:02.457194  681393 ssh_runner.go:195] Run: which crictl
	I1019 12:54:02.460691  681393 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:54:02.484198  681393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:54:02.484286  681393 ssh_runner.go:195] Run: crio --version
	I1019 12:54:02.512250  681393 ssh_runner.go:195] Run: crio --version
	I1019 12:54:02.541370  681393 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:54:02.542404  681393 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:54:02.559907  681393 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:54:02.564090  681393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:54:02.575776  681393 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:54:02.576712  681393 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:54:02.576832  681393 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:54:02.576895  681393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:54:02.609310  681393 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:54:02.609334  681393 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:54:02.609391  681393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:54:02.635192  681393 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:54:02.635214  681393 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:54:02.635223  681393 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:54:02.635356  681393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
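	The unit drop-in above is what gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. On a live node the merged unit could be inspected with, for example (a hypothetical check, not part of the test flow):
	
	  sudo systemctl cat kubelet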
	I1019 12:54:02.635502  681393 ssh_runner.go:195] Run: crio config
	I1019 12:54:02.681745  681393 cni.go:84] Creating CNI manager for ""
	I1019 12:54:02.681766  681393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:54:02.681784  681393 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:54:02.681812  681393 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:54:02.681979  681393 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
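	A generated config like the one above can be sanity-checked offline with kubeadm's own validator (kubeadm config validate, available since kubeadm v1.26). A hypothetical invocation against the file the log writes a few lines below; the test itself never runs this:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new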
	I1019 12:54:02.682055  681393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:54:02.690198  681393 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:54:02.690257  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:54:02.697792  681393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:54:02.710462  681393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:54:02.722342  681393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1019 12:54:02.734823  681393 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:54:02.738286  681393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:54:02.747667  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:02.827413  681393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:54:02.848669  681393 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:54:02.848695  681393 certs.go:195] generating shared ca certs ...
	I1019 12:54:02.848716  681393 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:02.848893  681393 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:54:02.848941  681393 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:54:02.848957  681393 certs.go:257] generating profile certs ...
	I1019 12:54:02.849087  681393 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:54:02.849173  681393 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:54:02.849226  681393 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:54:02.849370  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:54:02.849411  681393 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:54:02.849441  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:54:02.849476  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:54:02.849507  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:54:02.849535  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:54:02.849611  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:54:02.850184  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:54:02.868123  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:54:02.885834  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:54:02.905665  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:54:02.929969  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:54:02.948044  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:54:02.964295  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:54:02.980557  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:54:02.996996  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:54:03.013624  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:54:03.029910  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:54:03.047274  681393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:54:03.059300  681393 ssh_runner.go:195] Run: openssl version
	I1019 12:54:03.065270  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:54:03.073092  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.076663  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.076721  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.110329  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:54:03.118839  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:54:03.126968  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.130523  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.130574  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.163850  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:54:03.171916  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:54:03.179859  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.183412  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.183471  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.217980  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:54:03.226163  681393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:54:03.230201  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:54:03.264575  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:54:03.298526  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:54:03.332667  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:54:03.380870  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:54:03.426214  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
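	
	Each -checkend 86400 invocation above asks openssl whether the control-plane certificate will expire within the next 86400 seconds (24 hours); a non-zero exit flags the cert as needing regeneration. The same check in pure Go with crypto/x509, as a standalone sketch:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// Equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// exits 1 if the certificate expires within the next 24 hours.
	func main() {
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(2)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}
	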
	I1019 12:54:03.478042  681393 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:54:03.478167  681393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:54:03.478258  681393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:54:03.518881  681393 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:03.518926  681393 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:03.518932  681393 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:03.518936  681393 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:03.518940  681393 cri.go:89] found id: ""
	I1019 12:54:03.518989  681393 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:54:03.532091  681393 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:03Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:54:03.532163  681393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:54:03.540253  681393 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:54:03.540276  681393 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:54:03.540323  681393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:54:03.548044  681393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:54:03.548555  681393 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-190708" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:54:03.548684  681393 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-190708" cluster setting kubeconfig missing "newest-cni-190708" context setting]
	I1019 12:54:03.549041  681393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
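	
	kubeconfig.go has found the profile's cluster and context entries missing from the shared kubeconfig and rewrites the file under a write lock. With client-go's clientcmd package, the repair reduces to load, insert, write back; the sketch below uses illustrative names and omits the locking:
	
	package kubeconfig
	
	import (
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)
	
	// addClusterEntry inserts a cluster and a matching context into an
	// existing kubeconfig file, then writes the file back in place.
	func addClusterEntry(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cluster := api.NewCluster()
		cluster.Server = server // e.g. https://192.168.94.2:8443
		cfg.Clusters[name] = cluster
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
		return clientcmd.WriteToFile(*cfg, path)
	}
	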
	I1019 12:54:03.550661  681393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:54:03.558367  681393 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1019 12:54:03.558398  681393 kubeadm.go:601] duration metric: took 18.115394ms to restartPrimaryControlPlane
	I1019 12:54:03.558407  681393 kubeadm.go:402] duration metric: took 80.382599ms to StartCluster
	I1019 12:54:03.558455  681393 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:03.558521  681393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:54:03.559220  681393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:03.559503  681393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:54:03.559608  681393 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:54:03.559722  681393 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:54:03.559733  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:54:03.559750  681393 addons.go:69] Setting dashboard=true in profile "newest-cni-190708"
	I1019 12:54:03.559768  681393 addons.go:238] Setting addon dashboard=true in "newest-cni-190708"
	I1019 12:54:03.559741  681393 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	W1019 12:54:03.559775  681393 addons.go:247] addon dashboard should already be in state true
	I1019 12:54:03.559775  681393 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	W1019 12:54:03.559785  681393 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:54:03.559806  681393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:54:03.559809  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.559810  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.560110  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.560221  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.560281  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.561718  681393 out.go:179] * Verifying Kubernetes components...
	I1019 12:54:03.563115  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:03.584973  681393 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:54:03.586142  681393 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:54:03.586190  681393 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:54:03.586206  681393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:54:03.586261  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.586835  681393 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	W1019 12:54:03.586855  681393 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:54:03.586898  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.587535  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.588450  681393 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:54:03.589524  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:54:03.589540  681393 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:54:03.589602  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.616491  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.619375  681393 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:54:03.619401  681393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:54:03.619476  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.621166  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.643831  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.702635  681393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:54:03.715530  681393 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:54:03.715609  681393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:54:03.728115  681393 api_server.go:72] duration metric: took 168.575992ms to wait for apiserver process to appear ...
	I1019 12:54:03.728157  681393 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:54:03.728179  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:03.732211  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:54:03.732233  681393 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:54:03.735855  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:54:03.746173  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:54:03.746195  681393 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:54:03.752527  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:54:03.760397  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:54:03.760453  681393 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:54:03.775150  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:54:03.775175  681393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:54:03.789276  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:54:03.789301  681393 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:54:03.808071  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:54:03.808127  681393 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:54:03.823049  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:54:03.823078  681393 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:54:03.835514  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:54:03.835565  681393 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:54:03.847751  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:54:03.847773  681393 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:54:03.860143  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
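	
	All ten dashboard manifests go to the apiserver in one kubectl invocation with repeated -f flags, using the kubeconfig and versioned kubectl binary stored inside the node. A trimmed sketch of assembling that command; the paths mirror the logged invocation and the file list is abbreviated:
	
	package main
	
	import (
		"os"
		"os/exec"
	)
	
	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
			// ... the remaining dashboard manifests
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
	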
	I1019 12:54:05.083631  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:54:05.083660  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:54:05.083683  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.089539  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:54:05.089566  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:54:05.228580  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.235398  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:05.235442  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:05.580157  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.844265474s)
	I1019 12:54:05.580202  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.827651061s)
	I1019 12:54:05.580324  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.720148239s)
	I1019 12:54:05.582047  681393 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-190708 addons enable metrics-server
	
	I1019 12:54:05.590505  681393 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:54:05.591606  681393 addons.go:514] duration metric: took 2.032013086s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:54:05.728556  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.732628  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:05.732651  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:06.228929  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:06.234061  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:06.234091  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:06.728500  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:06.732762  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:54:06.733733  681393 api_server.go:141] control plane version: v1.34.1
	I1019 12:54:06.733757  681393 api_server.go:131] duration metric: took 3.005593435s to wait for apiserver health ...
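	
	The healthz progression recorded above is the normal restart sequence: 403 while anonymous access is still forbidden (the RBAC bootstrap roles that let unauthenticated clients read /healthz are not yet installed), 500 while the rbac/bootstrap-roles and scheduling poststarthooks are pending, then 200 once every hook reports ok. A sketch of the polling loop the api_server.go lines describe; a real client would pin the cluster CA instead of skipping TLS verification:
	
	package health
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls an apiserver /healthz endpoint until it returns
	// HTTP 200 or the deadline passes, sleeping briefly between attempts.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %v", url, timeout)
	}
	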
	I1019 12:54:06.733769  681393 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:54:06.737374  681393 system_pods.go:59] 8 kube-system pods found
	I1019 12:54:06.737409  681393 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:54:06.737437  681393 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:54:06.737450  681393 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:54:06.737459  681393 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:54:06.737472  681393 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:54:06.737487  681393 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:54:06.737498  681393 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:54:06.737502  681393 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:54:06.737509  681393 system_pods.go:74] duration metric: took 3.731671ms to wait for pod list to return data ...
	I1019 12:54:06.737519  681393 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:54:06.739826  681393 default_sa.go:45] found service account: "default"
	I1019 12:54:06.739846  681393 default_sa.go:55] duration metric: took 2.320798ms for default service account to be created ...
	I1019 12:54:06.739856  681393 kubeadm.go:586] duration metric: took 3.180324861s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:54:06.739884  681393 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:54:06.742226  681393 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:54:06.742247  681393 node_conditions.go:123] node cpu capacity is 8
	I1019 12:54:06.742257  681393 node_conditions.go:105] duration metric: took 2.365715ms to run NodePressure ...
	I1019 12:54:06.742271  681393 start.go:241] waiting for startup goroutines ...
	I1019 12:54:06.742283  681393 start.go:246] waiting for cluster config update ...
	I1019 12:54:06.742300  681393 start.go:255] writing updated cluster config ...
	I1019 12:54:06.742610  681393 ssh_runner.go:195] Run: rm -f paused
	I1019 12:54:06.792772  681393 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:54:06.794287  681393 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.218739231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.222308953Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8950363-3dfb-4fad-94d3-869631d6c8d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.223098736Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81ab7e9d-7a6a-4196-a1fa-02481f77de5c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.224367164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.224846497Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.225358928Z" level=info msg="Ran pod sandbox ccaaa82cb15396e216269b1702b7caa21376cf945d66507f0a94c38b4e7fdd03 with infra container: kube-system/kube-proxy-v7xgj/POD" id=e8950363-3dfb-4fad-94d3-869631d6c8d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22568181Z" level=info msg="Ran pod sandbox 7bff8ebb6c54ecb35cc91bdfe197c8bb5cc395fc8fa5632dd352223b87fbc571 with infra container: kube-system/kindnet-8bb9r/POD" id=81ab7e9d-7a6a-4196-a1fa-02481f77de5c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22638993Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=822922b9-5693-47a3-9583-bd0b9d16f0af name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.226695713Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=edf40a55-0593-46cb-aa8b-0e5b45ca3e4b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.227338661Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=87390921-c9e6-4d5f-85e6-3f03f6376ebd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.227605615Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=249db88e-6ac6-4269-a770-0cd88d3480d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.228462432Z" level=info msg="Creating container: kube-system/kube-proxy-v7xgj/kube-proxy" id=97720fc4-c225-4595-9b34-bda657d022cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22859495Z" level=info msg="Creating container: kube-system/kindnet-8bb9r/kindnet-cni" id=17d54139-2e24-4638-a863-221511b00834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.228717336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22878347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.233445363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234121224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234283704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234855787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.259774414Z" level=info msg="Created container 65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5: kube-system/kindnet-8bb9r/kindnet-cni" id=17d54139-2e24-4638-a863-221511b00834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.260401087Z" level=info msg="Starting container: 65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5" id=cef45367-8d81-48a4-8cde-958273c5beb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.262146106Z" level=info msg="Started container" PID=1041 containerID=65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5 description=kube-system/kindnet-8bb9r/kindnet-cni id=cef45367-8d81-48a4-8cde-958273c5beb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bff8ebb6c54ecb35cc91bdfe197c8bb5cc395fc8fa5632dd352223b87fbc571
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.264265455Z" level=info msg="Created container 4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d: kube-system/kube-proxy-v7xgj/kube-proxy" id=97720fc4-c225-4595-9b34-bda657d022cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.264897563Z" level=info msg="Starting container: 4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d" id=4c3d7009-a6bd-4c65-a2be-9293708adb27 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.267584827Z" level=info msg="Started container" PID=1042 containerID=4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d description=kube-system/kube-proxy-v7xgj/kube-proxy id=4c3d7009-a6bd-4c65-a2be-9293708adb27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccaaa82cb15396e216269b1702b7caa21376cf945d66507f0a94c38b4e7fdd03
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	65e4d07efdb1a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   7bff8ebb6c54e       kindnet-8bb9r                               kube-system
	4e251df0a1d9d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   ccaaa82cb1539       kube-proxy-v7xgj                            kube-system
	4ef96fcd55a50       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   54e495ddaf984       kube-apiserver-newest-cni-190708            kube-system
	f130d56dff95e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   bdf001e612967       kube-scheduler-newest-cni-190708            kube-system
	3de424704aadd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   de292fed6fb10       kube-controller-manager-newest-cni-190708   kube-system
	4b4056b243fcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   70641c42922dc       etcd-newest-cni-190708                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-190708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-190708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-190708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:53:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-190708
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:54:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-190708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4573dffe-685a-448f-8daf-99deda56b058
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-190708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-8bb9r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-newest-cni-190708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-newest-cni-190708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-v7xgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-newest-cni-190708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 33s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 40s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s              kubelet          Node newest-cni-190708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s              kubelet          Node newest-cni-190708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s              kubelet          Node newest-cni-190708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s              node-controller  Node newest-cni-190708 event: Registered Node newest-cni-190708 in Controller
	  Normal  Starting                 8s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x2 over 8s)  kubelet          Node newest-cni-190708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x2 over 8s)  kubelet          Node newest-cni-190708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x2 over 8s)  kubelet          Node newest-cni-190708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-190708 event: Registered Node newest-cni-190708 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343] <==
	{"level":"warn","ts":"2025-10-19T12:54:04.504629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.512551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.518565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.524395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.530291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.536076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.542071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.547867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.554013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.559909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.568448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.574956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.581548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.587711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.593774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.599769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.605682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.612102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.617985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.623916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.630299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.646340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.652100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.657951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.701874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:54:10 up  2:36,  0 user,  load average: 2.29, 4.11, 2.98
	Linux newest-cni-190708 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5] <==
	I1019 12:54:06.383309       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:54:06.383572       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:54:06.383720       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:54:06.383738       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:54:06.383767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:54:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:54:06.582559       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:54:06.582642       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:54:06.582662       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:54:06.678696       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:54:07.079095       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:54:07.079129       1 metrics.go:72] Registering metrics
	I1019 12:54:07.079220       1 controller.go:711] "Syncing nftables rules"
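	# The "nri plugin exited" line above is kindnet probing for the runtime's NRI socket; CRI-O did
	# not create /var/run/nri/nri.sock on this node, so kindnet continues without NRI. A minimal
	# check, assuming the profile is still up (hypothetical follow-up, not part of the captured run):
	minikube -p newest-cni-190708 ssh "test -S /var/run/nri/nri.sock && echo present || echo absent"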
	
	
	==> kube-apiserver [4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426] <==
	I1019 12:54:05.163106       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 12:54:05.163163       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:54:05.163173       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:54:05.163178       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:54:05.163184       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:54:05.163199       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:54:05.163347       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 12:54:05.163745       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 12:54:05.163766       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 12:54:05.163904       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 12:54:05.169246       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:54:05.174987       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:54:05.185511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:54:05.399936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:54:05.426572       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:54:05.444638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:54:05.450906       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:54:05.457571       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:54:05.486587       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.29.89"}
	I1019 12:54:05.495640       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.57.240"}
	I1019 12:54:06.066739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:54:08.922369       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:54:08.972804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:54:09.022934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1] <==
	I1019 12:54:08.519184       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:54:08.519271       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 12:54:08.519297       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:54:08.519385       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:54:08.519488       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:54:08.519686       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:54:08.519686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:54:08.519880       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:54:08.520549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:54:08.520602       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:54:08.521739       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:54:08.521754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:54:08.524971       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:54:08.525019       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:54:08.525063       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:54:08.525074       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:54:08.525082       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:54:08.525130       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:54:08.525148       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:54:08.525156       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:54:08.527977       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:54:08.530464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:54:08.535844       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 12:54:08.539090       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:54:08.541329       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d] <==
	I1019 12:54:06.300727       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:54:06.360086       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:54:06.461027       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:54:06.461079       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:54:06.461183       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:54:06.478901       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:54:06.478977       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:54:06.484233       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:54:06.484631       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:54:06.484658       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:54:06.486437       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:54:06.486534       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:54:06.486570       1 config.go:309] "Starting node config controller"
	I1019 12:54:06.486628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:54:06.486641       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:54:06.486666       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:54:06.486673       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:54:06.486417       1 config.go:200] "Starting service config controller"
	I1019 12:54:06.486755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:54:06.587531       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:54:06.587530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:54:06.587537       1 shared_informer.go:356] "Caches are synced" controller="service config"
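	# The nodePortAddresses warning above is the only kube-proxy error in this run, and the remedy is
	# the flag the message itself suggests (or the equivalent nodePortAddresses field of the
	# KubeProxyConfiguration). Illustrative invocation, assuming a kube-proxy version that accepts
	# the `primary` keyword:
	kube-proxy --nodeport-addresses primary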
	
	
	==> kube-scheduler [f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792] <==
	I1019 12:54:04.130912       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:54:05.124287       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:54:05.124325       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:54:05.130627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:54:05.130631       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:54:05.130672       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:54:05.130673       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:54:05.130713       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.130721       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.130894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:54:05.130931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:54:05.231210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.231214       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:54:05.231360       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:54:04 newest-cni-190708 kubelet[668]: E1019 12:54:04.949604     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-190708\" not found" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.112916     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183163     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183242     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183272     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.184146     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.225308     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-190708\" already exists" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.225343     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.234478     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-190708\" already exists" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.234520     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.241334     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190708\" already exists" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.241370     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.247144     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-190708\" already exists" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.910817     668 apiserver.go:52] "Watching apiserver"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.950338     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.956811     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190708\" already exists" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.013516     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.086915     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-cni-cfg\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.086984     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-lib-modules\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087078     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-xtables-lock\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087305     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-xtables-lock\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087340     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-lib-modules\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
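	# The three systemd lines above record kubelet being stopped at 12:54:07, which is what
	# `minikube pause` does by design; the status probes below then report the host as Running while
	# kubelet is down. A quick unit-state check, assuming the container is still up (hypothetical
	# follow-up, not part of the captured run):
	minikube -p newest-cni-190708 ssh "sudo systemctl is-active kubelet"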
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190708 -n newest-cni-190708: exit status 2 (306.5549ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-190708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk: exit status 1 (59.318899ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-kp55x" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vnv2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vsplk" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk: exit status 1
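The NotFound errors above are consistent with the listed pods having been deleted and replaced (with new ReplicaSet hashes) between the earlier listing and the describe. One generic way to confirm churn like that, independent of the harness, is to sort recent cluster events:

	kubectl --context newest-cni-190708 get events -A --sort-by=.lastTimestamp | tail -n 20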
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-190708
helpers_test.go:243: (dbg) docker inspect newest-cni-190708:

-- stdout --
	[
	    {
	        "Id": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	        "Created": "2025-10-19T12:53:16.899890869Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 681591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-19T12:53:57.004485226Z",
	            "FinishedAt": "2025-10-19T12:53:56.194573744Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hostname",
	        "HostsPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/hosts",
	        "LogPath": "/var/lib/docker/containers/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e/058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e-json.log",
	        "Name": "/newest-cni-190708",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-190708:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-190708",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "058030ae05d2042349424ed348e6dc9d36dede4603128da7ab544fd77e41679e",
	                "LowerDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97-init/diff:/var/lib/docker/overlay2/026ae40ea1cc884d4682c7edf40a9959d3f1f6ccb37f720ceca844563d96203e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/653c8d1502eed2b75e202821e542d41034ff9a79f47523da97128e4604cb9c97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-190708",
	                "Source": "/var/lib/docker/volumes/newest-cni-190708/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-190708",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-190708",
	                "name.minikube.sigs.k8s.io": "newest-cni-190708",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "17458ca334845ef149f415646f1b3c3f72ede8a5c987ddfc2b246ad1cca7c212",
	            "SandboxKey": "/var/run/docker/netns/17458ca33484",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-190708": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:99:01:b3:8a:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f377a8132f38263e0c4abe3d087c7fa64425e9bfe055ce9e280edbfae9e21983",
	                    "EndpointID": "17ca0641071acea38cecf2b45765ffb0de4ffec4fae926a941cd6864572544f6",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-190708",
	                        "058030ae05d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
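The published SSH port in that inspect output (33505) can be read back directly with a Go template; this is the same query the provisioner issues later in this log (see the cli_runner line at 12:53:57):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-190708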
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708: exit status 2 (305.641729ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
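Both status probes in this post-mortem use single-field Go templates ({{.APIServer}} and {{.Host}}); the fields can also be combined into one call, a convenient variant when triaging a paused profile (Kubelet is another standard field of the same status struct):

	out/minikube-linux-amd64 status -p newest-cni-190708 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'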
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190708 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:52 UTC │
	│ start   │ -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:52 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ old-k8s-version-577062 image list --format=json                                                                                                                                                                                               │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p old-k8s-version-577062 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ image   │ no-preload-561408 image list --format=json                                                                                                                                                                                                    │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p no-preload-561408 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p old-k8s-version-577062                                                                                                                                                                                                                     │ old-k8s-version-577062       │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p no-preload-561408                                                                                                                                                                                                                          │ no-preload-561408            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable metrics-server -p newest-cni-190708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ stop    │ -p newest-cni-190708 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ embed-certs-123864 image list --format=json                                                                                                                                                                                                   │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p embed-certs-123864 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ image   │ default-k8s-diff-port-999693 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ pause   │ -p default-k8s-diff-port-999693 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │                     │
	│ delete  │ -p embed-certs-123864                                                                                                                                                                                                                         │ embed-certs-123864           │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p default-k8s-diff-port-999693                                                                                                                                                                                                               │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ delete  │ -p default-k8s-diff-port-999693                                                                                                                                                                                                               │ default-k8s-diff-port-999693 │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ addons  │ enable dashboard -p newest-cni-190708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:53 UTC │
	│ start   │ -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:53 UTC │ 19 Oct 25 12:54 UTC │
	│ image   │ newest-cni-190708 image list --format=json                                                                                                                                                                                                    │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:54 UTC │ 19 Oct 25 12:54 UTC │
	│ pause   │ -p newest-cni-190708 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-190708            │ jenkins │ v1.37.0 │ 19 Oct 25 12:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:53:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:53:56.775608  681393 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:53:56.775717  681393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:56.775721  681393 out.go:374] Setting ErrFile to fd 2...
	I1019 12:53:56.775725  681393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:53:56.775932  681393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:53:56.776391  681393 out.go:368] Setting JSON to false
	I1019 12:53:56.777590  681393 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9385,"bootTime":1760869052,"procs":467,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:53:56.777708  681393 start.go:141] virtualization: kvm guest
	I1019 12:53:56.779545  681393 out.go:179] * [newest-cni-190708] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:53:56.780810  681393 notify.go:220] Checking for updates...
	I1019 12:53:56.780833  681393 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:53:56.782177  681393 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:53:56.783534  681393 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:53:56.784688  681393 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:53:56.785720  681393 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:53:56.786991  681393 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:53:56.788412  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:53:56.788950  681393 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:53:56.812916  681393 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:53:56.813017  681393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:56.872433  681393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 12:53:56.862141078 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:56.872541  681393 docker.go:318] overlay module found
	I1019 12:53:56.874130  681393 out.go:179] * Using the docker driver based on existing profile
	I1019 12:53:56.875367  681393 start.go:305] selected driver: docker
	I1019 12:53:56.875380  681393 start.go:925] validating driver "docker" against &{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:56.875474  681393 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:53:56.876018  681393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:53:56.932120  681393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-19 12:53:56.922313914 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:53:56.932411  681393 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:53:56.932460  681393 cni.go:84] Creating CNI manager for ""
	I1019 12:53:56.932513  681393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:53:56.932548  681393 start.go:349] cluster config:
	{Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:53:56.934449  681393 out.go:179] * Starting "newest-cni-190708" primary control-plane node in "newest-cni-190708" cluster
	I1019 12:53:56.935660  681393 cache.go:123] Beginning downloading kic base image for docker with crio
	I1019 12:53:56.936901  681393 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1019 12:53:56.938144  681393 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:53:56.938193  681393 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 12:53:56.938216  681393 cache.go:58] Caching tarball of preloaded images
	I1019 12:53:56.938250  681393 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1019 12:53:56.938337  681393 preload.go:233] Found /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 12:53:56.938354  681393 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 12:53:56.938505  681393 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:56.959123  681393 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1019 12:53:56.959143  681393 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1019 12:53:56.959163  681393 cache.go:232] Successfully downloaded all kic artifacts
	I1019 12:53:56.959196  681393 start.go:360] acquireMachinesLock for newest-cni-190708: {Name:mk77ff67117e187a78edba04cd47af082236de6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 12:53:56.959267  681393 start.go:364] duration metric: took 47.819µs to acquireMachinesLock for "newest-cni-190708"
	I1019 12:53:56.959289  681393 start.go:96] Skipping create...Using existing machine configuration
	I1019 12:53:56.959298  681393 fix.go:54] fixHost starting: 
	I1019 12:53:56.959560  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:56.977861  681393 fix.go:112] recreateIfNeeded on newest-cni-190708: state=Stopped err=<nil>
	W1019 12:53:56.977905  681393 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 12:53:56.979572  681393 out.go:252] * Restarting existing docker container for "newest-cni-190708" ...
	I1019 12:53:56.979657  681393 cli_runner.go:164] Run: docker start newest-cni-190708
	I1019 12:53:57.213228  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:53:57.231903  681393 kic.go:430] container "newest-cni-190708" state is running.
	I1019 12:53:57.232257  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:53:57.249491  681393 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/config.json ...
	I1019 12:53:57.249703  681393 machine.go:93] provisionDockerMachine start ...
	I1019 12:53:57.249775  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:53:57.268057  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:53:57.268291  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:53:57.268303  681393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1019 12:53:57.268868  681393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48476->127.0.0.1:33505: read: connection reset by peer
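	The dial error above is expected right after "docker start": the port forward comes up before sshd inside the container does, so the provisioner retries until a command succeeds (about three seconds later in this trace). A minimal TCP-level readiness sketch, assuming the forwarded address 127.0.0.1:33505 from the log and an illustrative 500ms/30s retry policy; minikube's real check retries the full SSH handshake, not just the TCP connect:

// A sketch of waiting for a freshly restarted container's SSH port to accept
// connections. Address and the retry policy are assumptions for illustration.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not reachable after %s: %w", addr, timeout, err)
		}
		time.Sleep(500 * time.Millisecond) // container is still booting; retry
	}
}

func main() {
	if err := waitForTCP("127.0.0.1:33505", 30*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("SSH port ready")
}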
	I1019 12:54:00.401466  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:54:00.401494  681393 ubuntu.go:182] provisioning hostname "newest-cni-190708"
	I1019 12:54:00.401560  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.419699  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:00.419917  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:00.419935  681393 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-190708 && echo "newest-cni-190708" | sudo tee /etc/hostname
	I1019 12:54:00.560830  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-190708
	
	I1019 12:54:00.560907  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.579137  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:00.579368  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:00.579386  681393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-190708' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-190708/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-190708' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 12:54:00.710049  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1019 12:54:00.710090  681393 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-351705/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-351705/.minikube}
	I1019 12:54:00.710118  681393 ubuntu.go:190] setting up certificates
	I1019 12:54:00.710138  681393 provision.go:84] configureAuth start
	I1019 12:54:00.710193  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:54:00.728520  681393 provision.go:143] copyHostCerts
	I1019 12:54:00.728576  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem, removing ...
	I1019 12:54:00.728592  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem
	I1019 12:54:00.728669  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/ca.pem (1082 bytes)
	I1019 12:54:00.728778  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem, removing ...
	I1019 12:54:00.728791  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem
	I1019 12:54:00.728820  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/cert.pem (1123 bytes)
	I1019 12:54:00.728884  681393 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem, removing ...
	I1019 12:54:00.728892  681393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem
	I1019 12:54:00.728914  681393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-351705/.minikube/key.pem (1675 bytes)
	I1019 12:54:00.728965  681393 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem org=jenkins.newest-cni-190708 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-190708]
	I1019 12:54:00.840750  681393 provision.go:177] copyRemoteCerts
	I1019 12:54:00.840814  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 12:54:00.840860  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:00.858722  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:00.953617  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 12:54:00.970722  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1019 12:54:00.987192  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1019 12:54:01.004703  681393 provision.go:87] duration metric: took 294.550831ms to configureAuth
	I1019 12:54:01.004731  681393 ubuntu.go:206] setting minikube options for container-runtime
	I1019 12:54:01.004897  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:54:01.004996  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.022527  681393 main.go:141] libmachine: Using SSH client type: native
	I1019 12:54:01.022779  681393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x83ffc0] 0x842ca0 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1019 12:54:01.022801  681393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 12:54:01.274564  681393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 12:54:01.274587  681393 machine.go:96] duration metric: took 4.024870686s to provisionDockerMachine
	I1019 12:54:01.274599  681393 start.go:293] postStartSetup for "newest-cni-190708" (driver="docker")
	I1019 12:54:01.274610  681393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 12:54:01.274672  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 12:54:01.274722  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.293127  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.388852  681393 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 12:54:01.392349  681393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1019 12:54:01.392383  681393 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1019 12:54:01.392396  681393 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/addons for local assets ...
	I1019 12:54:01.392460  681393 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-351705/.minikube/files for local assets ...
	I1019 12:54:01.392539  681393 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem -> 3552622.pem in /etc/ssl/certs
	I1019 12:54:01.392635  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 12:54:01.400101  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:54:01.416948  681393 start.go:296] duration metric: took 142.318322ms for postStartSetup
	I1019 12:54:01.417033  681393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:54:01.417072  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.435357  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.527595  681393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1019 12:54:01.532162  681393 fix.go:56] duration metric: took 4.572855524s for fixHost
	I1019 12:54:01.532184  681393 start.go:83] releasing machines lock for "newest-cni-190708", held for 4.572906009s
	I1019 12:54:01.532238  681393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-190708
	I1019 12:54:01.550148  681393 ssh_runner.go:195] Run: cat /version.json
	I1019 12:54:01.550194  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.550266  681393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 12:54:01.550354  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:01.570055  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.570083  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:01.713249  681393 ssh_runner.go:195] Run: systemctl --version
	I1019 12:54:01.719964  681393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 12:54:01.754614  681393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 12:54:01.759320  681393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 12:54:01.759393  681393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 12:54:01.767169  681393 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
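	The find/-exec mv step above sidelines any bridge or podman CNI config so kindnet can own pod networking, marking each file with a .mk_disabled suffix. A rough Go equivalent of that rename pass, using the same directory and name patterns as the logged command (running it for real requires root):

// Sidelines bridge/podman CNI configs by renaming them with .mk_disabled,
// mirroring the logged find command. Paths and patterns come from the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	disabled := 0
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Mirror find's: -name *bridge* -or -name *podman*
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s, ", src)
		disabled++
	}
	if disabled == 0 {
		fmt.Println("no active bridge cni configs found - nothing to disable")
	}
}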
	I1019 12:54:01.767197  681393 start.go:495] detecting cgroup driver to use...
	I1019 12:54:01.767227  681393 detect.go:190] detected "systemd" cgroup driver on host os
	I1019 12:54:01.767268  681393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 12:54:01.781037  681393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 12:54:01.792481  681393 docker.go:218] disabling cri-docker service (if available) ...
	I1019 12:54:01.792537  681393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 12:54:01.806029  681393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 12:54:01.817933  681393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 12:54:01.894876  681393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 12:54:01.974621  681393 docker.go:234] disabling docker service ...
	I1019 12:54:01.974693  681393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 12:54:01.988467  681393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 12:54:02.000269  681393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 12:54:02.079762  681393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 12:54:02.159767  681393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 12:54:02.171908  681393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 12:54:02.185186  681393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 12:54:02.185253  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.193853  681393 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1019 12:54:02.193918  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.202248  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.210631  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.219032  681393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 12:54:02.226960  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.235483  681393 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.243649  681393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 12:54:02.251981  681393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 12:54:02.259097  681393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 12:54:02.266239  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:02.346133  681393 ssh_runner.go:195] Run: sudo systemctl restart crio
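	The sed run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts crio. A condensed sketch of the same sequence as a plain command runner, with the sed expressions copied from the log; error handling is simplified relative to minikube's ssh_runner:

// Runs the logged crio reconfiguration steps in order, then restarts crio.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(cmdline string) error {
	cmd := exec.Command("sudo", "sh", "-c", cmdline)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := []string{
		// Pin the pause image and cgroup manager, as in the log.
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// Allow pods to bind low ports, as the default_sysctls edit does.
		`grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf`,
		`systemctl daemon-reload`,
		`systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Fprintf(os.Stderr, "step failed: %s: %v\n", s, err)
			os.Exit(1)
		}
	}
}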
	I1019 12:54:02.453130  681393 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 12:54:02.453195  681393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 12:54:02.457135  681393 start.go:563] Will wait 60s for crictl version
	I1019 12:54:02.457194  681393 ssh_runner.go:195] Run: which crictl
	I1019 12:54:02.460691  681393 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1019 12:54:02.484198  681393 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1019 12:54:02.484286  681393 ssh_runner.go:195] Run: crio --version
	I1019 12:54:02.512250  681393 ssh_runner.go:195] Run: crio --version
	I1019 12:54:02.541370  681393 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1019 12:54:02.542404  681393 cli_runner.go:164] Run: docker network inspect newest-cni-190708 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1019 12:54:02.559907  681393 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1019 12:54:02.564090  681393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
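	The grep -v / echo / cp pipeline above is an add-or-replace edit of /etc/hosts: any stale host.minikube.internal line is dropped and the fresh mapping appended. A minimal Go sketch of the same upsert, using the IP and hostname from the log (writing /etc/hosts requires root):

// Upserts a single tab-separated hosts entry, mirroring the logged pipeline.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror grep -v $'\t<name>$': drop the stale entry, keep everything else.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}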
	I1019 12:54:02.575776  681393 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1019 12:54:02.576712  681393 kubeadm.go:883] updating cluster {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 12:54:02.576832  681393 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 12:54:02.576895  681393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:54:02.609310  681393 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:54:02.609334  681393 crio.go:433] Images already preloaded, skipping extraction
	I1019 12:54:02.609391  681393 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 12:54:02.635192  681393 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 12:54:02.635214  681393 cache_images.go:85] Images are preloaded, skipping loading
	I1019 12:54:02.635223  681393 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1019 12:54:02.635356  681393 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-190708 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 12:54:02.635502  681393 ssh_runner.go:195] Run: crio config
	I1019 12:54:02.681745  681393 cni.go:84] Creating CNI manager for ""
	I1019 12:54:02.681766  681393 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1019 12:54:02.681784  681393 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1019 12:54:02.681812  681393 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-190708 NodeName:newest-cni-190708 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 12:54:02.681979  681393 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-190708"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 12:54:02.682055  681393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 12:54:02.690198  681393 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 12:54:02.690257  681393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 12:54:02.697792  681393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1019 12:54:02.710462  681393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 12:54:02.722342  681393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1019 12:54:02.734823  681393 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1019 12:54:02.738286  681393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 12:54:02.747667  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:02.827413  681393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:54:02.848669  681393 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708 for IP: 192.168.94.2
	I1019 12:54:02.848695  681393 certs.go:195] generating shared ca certs ...
	I1019 12:54:02.848716  681393 certs.go:227] acquiring lock for ca certs: {Name:mka03c76cbafaf19a8f99018f66c27f5f0254883 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:02.848893  681393 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key
	I1019 12:54:02.848941  681393 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key
	I1019 12:54:02.848957  681393 certs.go:257] generating profile certs ...
	I1019 12:54:02.849087  681393 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/client.key
	I1019 12:54:02.849173  681393 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key.6779a6bd
	I1019 12:54:02.849226  681393 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key
	I1019 12:54:02.849370  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem (1338 bytes)
	W1019 12:54:02.849411  681393 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262_empty.pem, impossibly tiny 0 bytes
	I1019 12:54:02.849441  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 12:54:02.849476  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/ca.pem (1082 bytes)
	I1019 12:54:02.849507  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/cert.pem (1123 bytes)
	I1019 12:54:02.849535  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/certs/key.pem (1675 bytes)
	I1019 12:54:02.849611  681393 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem (1708 bytes)
	I1019 12:54:02.850184  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 12:54:02.868123  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1019 12:54:02.885834  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 12:54:02.905665  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 12:54:02.929969  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1019 12:54:02.948044  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 12:54:02.964295  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 12:54:02.980557  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/newest-cni-190708/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 12:54:02.996996  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 12:54:03.013624  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/certs/355262.pem --> /usr/share/ca-certificates/355262.pem (1338 bytes)
	I1019 12:54:03.029910  681393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/ssl/certs/3552622.pem --> /usr/share/ca-certificates/3552622.pem (1708 bytes)
	I1019 12:54:03.047274  681393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 12:54:03.059300  681393 ssh_runner.go:195] Run: openssl version
	I1019 12:54:03.065270  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 12:54:03.073092  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.076663  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 12:05 /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.076721  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 12:54:03.110329  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 12:54:03.118839  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/355262.pem && ln -fs /usr/share/ca-certificates/355262.pem /etc/ssl/certs/355262.pem"
	I1019 12:54:03.126968  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.130523  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 12:11 /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.130574  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/355262.pem
	I1019 12:54:03.163850  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/355262.pem /etc/ssl/certs/51391683.0"
	I1019 12:54:03.171916  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3552622.pem && ln -fs /usr/share/ca-certificates/3552622.pem /etc/ssl/certs/3552622.pem"
	I1019 12:54:03.179859  681393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.183412  681393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 12:11 /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.183471  681393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3552622.pem
	I1019 12:54:03.217980  681393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3552622.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 12:54:03.226163  681393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 12:54:03.230201  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 12:54:03.264575  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 12:54:03.298526  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 12:54:03.332667  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 12:54:03.380870  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 12:54:03.426214  681393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 12:54:03.478042  681393 kubeadm.go:400] StartCluster: {Name:newest-cni-190708 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-190708 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:54:03.478167  681393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 12:54:03.478258  681393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 12:54:03.518881  681393 cri.go:89] found id: "4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426"
	I1019 12:54:03.518926  681393 cri.go:89] found id: "f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792"
	I1019 12:54:03.518932  681393 cri.go:89] found id: "3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1"
	I1019 12:54:03.518936  681393 cri.go:89] found id: "4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343"
	I1019 12:54:03.518940  681393 cri.go:89] found id: ""
	I1019 12:54:03.518989  681393 ssh_runner.go:195] Run: sudo runc list -f json
	W1019 12:54:03.532091  681393 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:54:03Z" level=error msg="open /run/runc: no such file or directory"
	I1019 12:54:03.532163  681393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 12:54:03.540253  681393 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1019 12:54:03.540276  681393 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1019 12:54:03.540323  681393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 12:54:03.548044  681393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:54:03.548555  681393 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-190708" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:54:03.548684  681393 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-351705/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-190708" cluster setting kubeconfig missing "newest-cni-190708" context setting]
	I1019 12:54:03.549041  681393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:03.550661  681393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 12:54:03.558367  681393 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1019 12:54:03.558398  681393 kubeadm.go:601] duration metric: took 18.115394ms to restartPrimaryControlPlane
	I1019 12:54:03.558407  681393 kubeadm.go:402] duration metric: took 80.382599ms to StartCluster
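	The diff -u above is the reuse decision: if the deployed /var/tmp/minikube/kubeadm.yaml matches the freshly rendered .new copy, the control plane is restarted as-is instead of being reconfigured. A minimal sketch of that check, mapping diff's exit status to the decision:

// Decides whether the running cluster needs reconfiguration by diffing the
// deployed kubeadm config against the newly generated one.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func needsReconfigure(current, proposed string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical: skip reconfiguration
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed (missing file, etc.)
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if changed {
		fmt.Println("cluster config changed: reconfiguration required")
	} else {
		fmt.Println("the running cluster does not require reconfiguration")
	}
}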
	I1019 12:54:03.558455  681393 settings.go:142] acquiring lock: {Name:mk65d9852eeded65ce0706143b042bc523ab5b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:03.558521  681393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:54:03.559220  681393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-351705/kubeconfig: {Name:mk23de25dee01e1f126fd6f3b9feb2c904fbe1fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 12:54:03.559503  681393 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 12:54:03.559608  681393 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 12:54:03.559722  681393 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-190708"
	I1019 12:54:03.559733  681393 config.go:182] Loaded profile config "newest-cni-190708": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:54:03.559750  681393 addons.go:69] Setting dashboard=true in profile "newest-cni-190708"
	I1019 12:54:03.559768  681393 addons.go:238] Setting addon dashboard=true in "newest-cni-190708"
	I1019 12:54:03.559741  681393 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-190708"
	W1019 12:54:03.559775  681393 addons.go:247] addon dashboard should already be in state true
	I1019 12:54:03.559775  681393 addons.go:69] Setting default-storageclass=true in profile "newest-cni-190708"
	W1019 12:54:03.559785  681393 addons.go:247] addon storage-provisioner should already be in state true
	I1019 12:54:03.559806  681393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-190708"
	I1019 12:54:03.559809  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.559810  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.560110  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.560221  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.560281  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.561718  681393 out.go:179] * Verifying Kubernetes components...
	I1019 12:54:03.563115  681393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 12:54:03.584973  681393 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 12:54:03.586142  681393 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1019 12:54:03.586190  681393 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:54:03.586206  681393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 12:54:03.586261  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.586835  681393 addons.go:238] Setting addon default-storageclass=true in "newest-cni-190708"
	W1019 12:54:03.586855  681393 addons.go:247] addon default-storageclass should already be in state true
	I1019 12:54:03.586898  681393 host.go:66] Checking if "newest-cni-190708" exists ...
	I1019 12:54:03.587535  681393 cli_runner.go:164] Run: docker container inspect newest-cni-190708 --format={{.State.Status}}
	I1019 12:54:03.588450  681393 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1019 12:54:03.589524  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1019 12:54:03.589540  681393 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1019 12:54:03.589602  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.616491  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.619375  681393 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 12:54:03.619401  681393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 12:54:03.619476  681393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-190708
	I1019 12:54:03.621166  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.643831  681393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/newest-cni-190708/id_rsa Username:docker}
	I1019 12:54:03.702635  681393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 12:54:03.715530  681393 api_server.go:52] waiting for apiserver process to appear ...
	I1019 12:54:03.715609  681393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:54:03.728115  681393 api_server.go:72] duration metric: took 168.575992ms to wait for apiserver process to appear ...
	I1019 12:54:03.728157  681393 api_server.go:88] waiting for apiserver healthz status ...
	I1019 12:54:03.728179  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:03.732211  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1019 12:54:03.732233  681393 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1019 12:54:03.735855  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 12:54:03.746173  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1019 12:54:03.746195  681393 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1019 12:54:03.752527  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 12:54:03.760397  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1019 12:54:03.760453  681393 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1019 12:54:03.775150  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1019 12:54:03.775175  681393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1019 12:54:03.789276  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1019 12:54:03.789301  681393 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1019 12:54:03.808071  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1019 12:54:03.808127  681393 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1019 12:54:03.823049  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1019 12:54:03.823078  681393 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1019 12:54:03.835514  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1019 12:54:03.835565  681393 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1019 12:54:03.847751  681393 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1019 12:54:03.847773  681393 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1019 12:54:03.860143  681393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
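	All ten dashboard manifests staged above are applied in a single kubectl invocation, each as its own -f flag, run through sudo against the node-local kubeconfig. A sketch that rebuilds that command line from the paths in the log:

// Rebuilds the logged single-shot kubectl apply over the staged manifests.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	cmd := exec.Command("sudo", args...) // sudo VAR=value cmd, exactly as logged
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}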
	I1019 12:54:05.083631  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:54:05.083660  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:54:05.083683  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.089539  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 12:54:05.089566  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 12:54:05.228580  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.235398  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:05.235442  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:05.580157  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.844265474s)
	I1019 12:54:05.580202  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.827651061s)
	I1019 12:54:05.580324  681393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.720148239s)
	I1019 12:54:05.582047  681393 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-190708 addons enable metrics-server
	
	I1019 12:54:05.590505  681393 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1019 12:54:05.591606  681393 addons.go:514] duration metric: took 2.032013086s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1019 12:54:05.728556  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:05.732628  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:05.732651  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:06.228929  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:06.234061  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 12:54:06.234091  681393 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 12:54:06.728500  681393 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1019 12:54:06.732762  681393 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1019 12:54:06.733733  681393 api_server.go:141] control plane version: v1.34.1
	I1019 12:54:06.733757  681393 api_server.go:131] duration metric: took 3.005593435s to wait for apiserver health ...
	I1019 12:54:06.733769  681393 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 12:54:06.737374  681393 system_pods.go:59] 8 kube-system pods found
	I1019 12:54:06.737409  681393 system_pods.go:61] "coredns-66bc5c9577-kp55x" [9a472ee8-8fcb-410c-92d0-6f82b4bacad7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:54:06.737437  681393 system_pods.go:61] "etcd-newest-cni-190708" [2105393f-0676-49e0-aa1c-5efd62f5148c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 12:54:06.737450  681393 system_pods.go:61] "kindnet-8bb9r" [eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1019 12:54:06.737459  681393 system_pods.go:61] "kube-apiserver-newest-cni-190708" [6f2a10a0-1e97-46ef-831c-c648f1ead906] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 12:54:06.737472  681393 system_pods.go:61] "kube-controller-manager-newest-cni-190708" [2fd054d9-c518-4415-8279-b247bb13d91c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 12:54:06.737487  681393 system_pods.go:61] "kube-proxy-v7xgj" [9620c4c3-352a-4d93-8d43-f7a06fcd3374] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 12:54:06.737498  681393 system_pods.go:61] "kube-scheduler-newest-cni-190708" [8d1175ee-58dc-471d-856b-87d65a82c0c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 12:54:06.737502  681393 system_pods.go:61] "storage-provisioner" [d9659c6a-9cea-4234-aaf7-baafb55fcf58] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1019 12:54:06.737509  681393 system_pods.go:74] duration metric: took 3.731671ms to wait for pod list to return data ...
	I1019 12:54:06.737519  681393 default_sa.go:34] waiting for default service account to be created ...
	I1019 12:54:06.739826  681393 default_sa.go:45] found service account: "default"
	I1019 12:54:06.739846  681393 default_sa.go:55] duration metric: took 2.320798ms for default service account to be created ...
	I1019 12:54:06.739856  681393 kubeadm.go:586] duration metric: took 3.180324861s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1019 12:54:06.739884  681393 node_conditions.go:102] verifying NodePressure condition ...
	I1019 12:54:06.742226  681393 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1019 12:54:06.742247  681393 node_conditions.go:123] node cpu capacity is 8
	I1019 12:54:06.742257  681393 node_conditions.go:105] duration metric: took 2.365715ms to run NodePressure ...
	I1019 12:54:06.742271  681393 start.go:241] waiting for startup goroutines ...
	I1019 12:54:06.742283  681393 start.go:246] waiting for cluster config update ...
	I1019 12:54:06.742300  681393 start.go:255] writing updated cluster config ...
	I1019 12:54:06.742610  681393 ssh_runner.go:195] Run: rm -f paused
	I1019 12:54:06.792772  681393 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 12:54:06.794287  681393 out.go:179] * Done! kubectl is now configured to use "newest-cni-190708" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.218739231Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.222308953Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8950363-3dfb-4fad-94d3-869631d6c8d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.223098736Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=81ab7e9d-7a6a-4196-a1fa-02481f77de5c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.224367164Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.224846497Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.225358928Z" level=info msg="Ran pod sandbox ccaaa82cb15396e216269b1702b7caa21376cf945d66507f0a94c38b4e7fdd03 with infra container: kube-system/kube-proxy-v7xgj/POD" id=e8950363-3dfb-4fad-94d3-869631d6c8d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22568181Z" level=info msg="Ran pod sandbox 7bff8ebb6c54ecb35cc91bdfe197c8bb5cc395fc8fa5632dd352223b87fbc571 with infra container: kube-system/kindnet-8bb9r/POD" id=81ab7e9d-7a6a-4196-a1fa-02481f77de5c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22638993Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=822922b9-5693-47a3-9583-bd0b9d16f0af name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.226695713Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=edf40a55-0593-46cb-aa8b-0e5b45ca3e4b name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.227338661Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=87390921-c9e6-4d5f-85e6-3f03f6376ebd name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.227605615Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=249db88e-6ac6-4269-a770-0cd88d3480d3 name=/runtime.v1.ImageService/ImageStatus
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.228462432Z" level=info msg="Creating container: kube-system/kube-proxy-v7xgj/kube-proxy" id=97720fc4-c225-4595-9b34-bda657d022cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22859495Z" level=info msg="Creating container: kube-system/kindnet-8bb9r/kindnet-cni" id=17d54139-2e24-4638-a863-221511b00834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.228717336Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.22878347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.233445363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234121224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234283704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.234855787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.259774414Z" level=info msg="Created container 65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5: kube-system/kindnet-8bb9r/kindnet-cni" id=17d54139-2e24-4638-a863-221511b00834 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.260401087Z" level=info msg="Starting container: 65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5" id=cef45367-8d81-48a4-8cde-958273c5beb2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.262146106Z" level=info msg="Started container" PID=1041 containerID=65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5 description=kube-system/kindnet-8bb9r/kindnet-cni id=cef45367-8d81-48a4-8cde-958273c5beb2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7bff8ebb6c54ecb35cc91bdfe197c8bb5cc395fc8fa5632dd352223b87fbc571
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.264265455Z" level=info msg="Created container 4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d: kube-system/kube-proxy-v7xgj/kube-proxy" id=97720fc4-c225-4595-9b34-bda657d022cb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.264897563Z" level=info msg="Starting container: 4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d" id=4c3d7009-a6bd-4c65-a2be-9293708adb27 name=/runtime.v1.RuntimeService/StartContainer
	Oct 19 12:54:06 newest-cni-190708 crio[519]: time="2025-10-19T12:54:06.267584827Z" level=info msg="Started container" PID=1042 containerID=4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d description=kube-system/kube-proxy-v7xgj/kube-proxy id=4c3d7009-a6bd-4c65-a2be-9293708adb27 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccaaa82cb15396e216269b1702b7caa21376cf945d66507f0a94c38b4e7fdd03
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	65e4d07efdb1a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   7bff8ebb6c54e       kindnet-8bb9r                               kube-system
	4e251df0a1d9d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   ccaaa82cb1539       kube-proxy-v7xgj                            kube-system
	4ef96fcd55a50       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   54e495ddaf984       kube-apiserver-newest-cni-190708            kube-system
	f130d56dff95e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   bdf001e612967       kube-scheduler-newest-cni-190708            kube-system
	3de424704aadd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   de292fed6fb10       kube-controller-manager-newest-cni-190708   kube-system
	4b4056b243fcc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   70641c42922dc       etcd-newest-cni-190708                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-190708
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-190708
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad38febc9208a6161a33b404ac6dc7da615b3a99
	                    minikube.k8s.io/name=newest-cni-190708
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T12_53_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 12:53:28 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-190708
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 12:54:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 19 Oct 2025 12:54:05 +0000   Sun, 19 Oct 2025 12:53:26 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-190708
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863448Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                4573dffe-685a-448f-8daf-99deda56b058
	  Boot ID:                    93e478ab-07ca-4902-a86b-2f0ac4ca7900
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-190708                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-8bb9r                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-newest-cni-190708             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-newest-cni-190708    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-v7xgj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-newest-cni-190708             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 35s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node newest-cni-190708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node newest-cni-190708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node newest-cni-190708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node newest-cni-190708 event: Registered Node newest-cni-190708 in Controller
	  Normal  Starting                 10s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10s (x2 over 10s)  kubelet          Node newest-cni-190708 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x2 over 10s)  kubelet          Node newest-cni-190708 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x2 over 10s)  kubelet          Node newest-cni-190708 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-190708 event: Registered Node newest-cni-190708 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +0.026333] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 c8 53 2b a9 c4 08 06
	[Oct19 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000018] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +8.073531] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 36 5e 5a e5 25 69 08 06
	[  +0.000376] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 63 ab 39 64 36 08 06
	[  +6.178294] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba c8 4e 5e 5e f3 08 06
	[  +0.000349] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 f6 b0 1c 3a a0 08 06
	[  +1.351703] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[  +6.835901] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	[ +12.836459] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000019] ll header: 00000000: ff ff ff ff ff ff a6 a4 d6 6a 69 59 08 06
	[  +0.000428] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 37 9c 27 74 bd 08 06
	[Oct19 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 6f b4 a9 0f 35 08 06
	[  +0.000426] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 02 fe 1c 48 45 08 06
	
	
	==> etcd [4b4056b243fccafdf386a0031d8daadb87ab333c9c1633214d96ae3559fe3343] <==
	{"level":"warn","ts":"2025-10-19T12:54:04.504629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.512551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.518565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.524395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.530291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.536076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.542071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.547867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.554013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.559909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.568448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.574956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.581548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.587711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.593774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.599769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.605682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.612102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.617985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.623916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.630299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.646340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.652100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.657951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T12:54:04.701874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:54:12 up  2:36,  0 user,  load average: 2.29, 4.11, 2.98
	Linux newest-cni-190708 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65e4d07efdb1afbf1f081e63b94484253313769ab6dd517487bb2b509bfd0ce5] <==
	I1019 12:54:06.383309       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1019 12:54:06.383572       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1019 12:54:06.383720       1 main.go:148] setting mtu 1500 for CNI 
	I1019 12:54:06.383738       1 main.go:178] kindnetd IP family: "ipv4"
	I1019 12:54:06.383767       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-19T12:54:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1019 12:54:06.582559       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1019 12:54:06.582642       1 controller.go:381] "Waiting for informer caches to sync"
	I1019 12:54:06.582662       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1019 12:54:06.678696       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1019 12:54:07.079095       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1019 12:54:07.079129       1 metrics.go:72] Registering metrics
	I1019 12:54:07.079220       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [4ef96fcd55a50ba226b906beb6d33a69d08d927c4bcaf88048d22b93a8921426] <==
	I1019 12:54:05.163106       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1019 12:54:05.163163       1 aggregator.go:171] initial CRD sync complete...
	I1019 12:54:05.163173       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 12:54:05.163178       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 12:54:05.163184       1 cache.go:39] Caches are synced for autoregister controller
	I1019 12:54:05.163199       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1019 12:54:05.163347       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 12:54:05.163745       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 12:54:05.163766       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 12:54:05.163904       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 12:54:05.169246       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 12:54:05.174987       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 12:54:05.185511       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 12:54:05.399936       1 controller.go:667] quota admission added evaluator for: namespaces
	I1019 12:54:05.426572       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 12:54:05.444638       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 12:54:05.450906       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 12:54:05.457571       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 12:54:05.486587       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.29.89"}
	I1019 12:54:05.495640       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.57.240"}
	I1019 12:54:06.066739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 12:54:08.922369       1 controller.go:667] quota admission added evaluator for: endpoints
	I1019 12:54:08.972804       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 12:54:09.022934       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3de424704aaddbaac7b2e42e4afd14146505b46ff0d69e09c79df496bc1abdd1] <==
	I1019 12:54:08.519184       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 12:54:08.519271       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 12:54:08.519297       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1019 12:54:08.519385       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 12:54:08.519488       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 12:54:08.519686       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1019 12:54:08.519686       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 12:54:08.519880       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1019 12:54:08.520549       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1019 12:54:08.520602       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1019 12:54:08.521739       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 12:54:08.521754       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 12:54:08.524971       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1019 12:54:08.525019       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1019 12:54:08.525063       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1019 12:54:08.525074       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1019 12:54:08.525082       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1019 12:54:08.525130       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 12:54:08.525148       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 12:54:08.525156       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 12:54:08.527977       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1019 12:54:08.530464       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 12:54:08.535844       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1019 12:54:08.539090       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 12:54:08.541329       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4e251df0a1d9dd255f0f5618a28848aefc6ca1c783d044e3bad0f7982f108c5d] <==
	I1019 12:54:06.300727       1 server_linux.go:53] "Using iptables proxy"
	I1019 12:54:06.360086       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 12:54:06.461027       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 12:54:06.461079       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1019 12:54:06.461183       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 12:54:06.478901       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1019 12:54:06.478977       1 server_linux.go:132] "Using iptables Proxier"
	I1019 12:54:06.484233       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 12:54:06.484631       1 server.go:527] "Version info" version="v1.34.1"
	I1019 12:54:06.484658       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:54:06.486437       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 12:54:06.486534       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 12:54:06.486570       1 config.go:309] "Starting node config controller"
	I1019 12:54:06.486628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 12:54:06.486641       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 12:54:06.486666       1 config.go:106] "Starting endpoint slice config controller"
	I1019 12:54:06.486673       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 12:54:06.486417       1 config.go:200] "Starting service config controller"
	I1019 12:54:06.486755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 12:54:06.587531       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 12:54:06.587530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 12:54:06.587537       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f130d56dff95e348873fd450dec53a547f2bc78e4e6bc98ac4c2129ea4e39792] <==
	I1019 12:54:04.130912       1 serving.go:386] Generated self-signed cert in-memory
	I1019 12:54:05.124287       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 12:54:05.124325       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 12:54:05.130627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:54:05.130631       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 12:54:05.130672       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 12:54:05.130673       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 12:54:05.130713       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.130721       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.130894       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 12:54:05.130931       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 12:54:05.231210       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 12:54:05.231214       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 12:54:05.231360       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 12:54:04 newest-cni-190708 kubelet[668]: E1019 12:54:04.949604     668 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-190708\" not found" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.112916     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183163     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183242     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.183272     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.184146     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.225308     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-190708\" already exists" pod="kube-system/kube-controller-manager-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.225343     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.234478     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-190708\" already exists" pod="kube-system/kube-scheduler-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.234520     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.241334     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190708\" already exists" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.241370     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.247144     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-190708\" already exists" pod="kube-system/kube-apiserver-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.910817     668 apiserver.go:52] "Watching apiserver"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: I1019 12:54:05.950338     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:05 newest-cni-190708 kubelet[668]: E1019 12:54:05.956811     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-190708\" already exists" pod="kube-system/etcd-newest-cni-190708"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.013516     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.086915     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-cni-cfg\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.086984     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-lib-modules\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087078     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9620c4c3-352a-4d93-8d43-f7a06fcd3374-xtables-lock\") pod \"kube-proxy-v7xgj\" (UID: \"9620c4c3-352a-4d93-8d43-f7a06fcd3374\") " pod="kube-system/kube-proxy-v7xgj"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087305     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-xtables-lock\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:06 newest-cni-190708 kubelet[668]: I1019 12:54:06.087340     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d-lib-modules\") pod \"kindnet-8bb9r\" (UID: \"eab1cd8a-3930-42c5-8df0-e3fa3fcb7d4d\") " pod="kube-system/kindnet-8bb9r"
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 19 12:54:07 newest-cni-190708 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
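Note: the healthz transcript above is minikube's readiness wait (api_server.go) in action: it polls https://192.168.94.2:8443/healthz, treats a 500 whose only failures are poststarthooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes) as "not ready yet", and stops once the endpoint returns 200 "ok". A minimal Go sketch of that polling pattern, with the TLS-skipping client and the timeout as illustrative assumptions rather than minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200, mirroring the
	// check/retry pattern in the log above. A 500 (failed poststarthooks)
	// simply means "try again".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver here serves a self-signed certificate, so this
			// illustration skips verification; real code should trust the CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}

Against this cluster the loop converges quickly, matching the "took 3.005593435s to wait for apiserver health" line above.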
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190708 -n newest-cni-190708
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190708 -n newest-cni-190708: exit status 2 (314.435621ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-190708 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk
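Note: the field-selector query above is how the harness enumerates pods that are not yet Running. A minimal client-go equivalent of that query, with the kubeconfig path as an illustrative assumption (the harness resolves the "newest-cni-190708" context itself):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path is an assumption for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the kubectl call above: every pod whose phase is
		// not Running, across all namespaces.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}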
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk: exit status 1 (57.884827ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-kp55x" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-vnv2w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vsplk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-vnv2w kubernetes-dashboard-855c9754f9-vsplk: exit status 1
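Note: the NotFound errors above are a namespace effect rather than evidence that the pods were deleted: kubectl describe pod without -n searches only the context's default namespace, while these pods live in kube-system and kubernetes-dashboard. An illustrative query that would locate the coredns pod (hypothetical, following the report's command style):

	kubectl --context newest-cni-190708 describe pod coredns-66bc5c9577-kp55x -n kube-system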
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.63s)
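Note: the Pending pods in this run (coredns-66bc5c9577-kp55x, storage-provisioner) were blocked by the node.kubernetes.io/not-ready:NoSchedule taint shown under "describe nodes"; that taint lifts once kindnet initializes the CNI and the node reports Ready. DaemonSet pods such as kindnet and kube-proxy schedule through that window because the controller injects matching tolerations. A minimal sketch of such a toleration using the core/v1 types (illustrative only, not part of the test):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	func main() {
		// Matches the taint in "describe nodes" above; with Operator
		// Exists, no Value field is needed.
		tol := corev1.Toleration{
			Key:      "node.kubernetes.io/not-ready",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		}
		fmt.Printf("%+v\n", tol)
	}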

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 3.76
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.93
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.8
22 TestOffline 61.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 160
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 7.41
48 TestAddons/StoppedEnableDisable 16.65
49 TestCertOptions 30.54
50 TestCertExpiration 211.86
52 TestForceSystemdFlag 28.22
53 TestForceSystemdEnv 31.4
55 TestKVMDriverInstallOrUpdate 1.01
59 TestErrorSpam/setup 20.19
60 TestErrorSpam/start 0.63
61 TestErrorSpam/status 0.91
62 TestErrorSpam/pause 6.62
63 TestErrorSpam/unpause 5.49
64 TestErrorSpam/stop 2.55
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 41.54
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.11
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
76 TestFunctional/serial/CacheCmd/cache/add_local 1.18
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 65.2
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.18
87 TestFunctional/serial/LogsFileCmd 1.21
88 TestFunctional/serial/InvalidService 3.79
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 6.93
92 TestFunctional/parallel/DryRun 0.66
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.15
100 TestFunctional/parallel/PersistentVolumeClaim 23.62
102 TestFunctional/parallel/SSHCmd 0.67
103 TestFunctional/parallel/CpCmd 1.77
104 TestFunctional/parallel/MySQL 15.43
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.63
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
114 TestFunctional/parallel/License 0.42
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.24
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
122 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
128 TestFunctional/parallel/ProfileCmd/profile_list 0.37
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
130 TestFunctional/parallel/MountCmd/any-port 5.47
131 TestFunctional/parallel/MountCmd/specific-port 1.93
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.41
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
137 TestFunctional/parallel/ImageCommands/ImageBuild 6.13
138 TestFunctional/parallel/ImageCommands/Setup 0.95
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
149 TestFunctional/parallel/Version/short 0.05
150 TestFunctional/parallel/Version/components 0.47
151 TestFunctional/parallel/ServiceCmd/List 1.71
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 101.86
164 TestMultiControlPlane/serial/DeployApp 3.84
165 TestMultiControlPlane/serial/PingHostFromPods 0.96
166 TestMultiControlPlane/serial/AddWorkerNode 26.89
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
169 TestMultiControlPlane/serial/CopyFile 16.51
170 TestMultiControlPlane/serial/StopSecondaryNode 14.2
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.83
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.2
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.54
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 41.65
178 TestMultiControlPlane/serial/RestartCluster 51.8
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
180 TestMultiControlPlane/serial/AddSecondaryNode 35.14
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 38.85
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.13
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 27.62
211 TestKicCustomNetwork/use_default_bridge_network 24.12
212 TestKicExistingNetwork 26.86
213 TestKicCustomSubnet 23.98
214 TestKicStaticIP 24.51
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 47.26
219 TestMountStart/serial/StartWithMountFirst 5.28
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.34
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.52
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 60.7
231 TestMultiNode/serial/DeployApp2Nodes 3.37
232 TestMultiNode/serial/PingHostFrom2Pods 0.65
233 TestMultiNode/serial/AddNode 53.79
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 9.38
237 TestMultiNode/serial/StopNode 2.22
238 TestMultiNode/serial/StartAfterStop 7.16
239 TestMultiNode/serial/RestartKeepsNodes 82.11
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 28.44
242 TestMultiNode/serial/RestartMultiNode 27.35
243 TestMultiNode/serial/ValidateNameConflict 26.72
248 TestPreload 105.56
250 TestScheduledStopUnix 96.29
253 TestInsufficientStorage 9.68
254 TestRunningBinaryUpgrade 48.35
256 TestKubernetesUpgrade 317.34
257 TestMissingContainerUpgrade 90.13
258 TestStoppedBinaryUpgrade/Setup 0.49
259 TestStoppedBinaryUpgrade/Upgrade 73.53
260 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
268 TestNetworkPlugins/group/false 3.58
280 TestPause/serial/Start 39.86
281 TestPause/serial/SecondStartNoReconfiguration 7.91
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
284 TestNoKubernetes/serial/StartWithK8s 25.96
286 TestNetworkPlugins/group/auto/Start 41.63
287 TestNoKubernetes/serial/StartWithStopK8s 16.84
288 TestNoKubernetes/serial/Start 4.87
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
290 TestNoKubernetes/serial/ProfileList 1.74
291 TestNoKubernetes/serial/Stop 1.25
292 TestNoKubernetes/serial/StartNoArgs 6.4
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
294 TestNetworkPlugins/group/auto/KubeletFlags 0.28
295 TestNetworkPlugins/group/auto/NetCatPod 9.19
296 TestNetworkPlugins/group/kindnet/Start 41.27
297 TestNetworkPlugins/group/auto/DNS 0.11
298 TestNetworkPlugins/group/auto/Localhost 0.08
299 TestNetworkPlugins/group/auto/HairPin 0.09
300 TestNetworkPlugins/group/calico/Start 50.03
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.25
304 TestNetworkPlugins/group/kindnet/DNS 0.11
305 TestNetworkPlugins/group/kindnet/Localhost 0.1
306 TestNetworkPlugins/group/kindnet/HairPin 0.1
307 TestNetworkPlugins/group/custom-flannel/Start 49.68
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/enable-default-cni/Start 42.37
310 TestNetworkPlugins/group/calico/KubeletFlags 0.32
311 TestNetworkPlugins/group/calico/NetCatPod 9.27
312 TestNetworkPlugins/group/calico/DNS 0.12
313 TestNetworkPlugins/group/calico/Localhost 0.11
314 TestNetworkPlugins/group/flannel/Start 49.27
315 TestNetworkPlugins/group/calico/HairPin 0.11
316 TestNetworkPlugins/group/bridge/Start 67.3
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
321 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
322 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
324 TestNetworkPlugins/group/custom-flannel/DNS 0.11
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
329 TestNetworkPlugins/group/flannel/NetCatPod 8.22
331 TestStartStop/group/old-k8s-version/serial/FirstStart 51.92
332 TestNetworkPlugins/group/flannel/DNS 0.17
333 TestNetworkPlugins/group/flannel/Localhost 0.1
334 TestNetworkPlugins/group/flannel/HairPin 0.1
336 TestStartStop/group/no-preload/serial/FirstStart 52.58
338 TestStartStop/group/embed-certs/serial/FirstStart 71.65
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
340 TestNetworkPlugins/group/bridge/NetCatPod 11.2
341 TestNetworkPlugins/group/bridge/DNS 0.15
342 TestNetworkPlugins/group/bridge/Localhost 0.13
343 TestNetworkPlugins/group/bridge/HairPin 0.15
344 TestStartStop/group/old-k8s-version/serial/DeployApp 9.29
345 TestStartStop/group/no-preload/serial/DeployApp 8.21
347 TestStartStop/group/old-k8s-version/serial/Stop 16.91
349 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.52
351 TestStartStop/group/no-preload/serial/Stop 18.07
352 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
353 TestStartStop/group/old-k8s-version/serial/SecondStart 53.13
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
355 TestStartStop/group/no-preload/serial/SecondStart 43.66
356 TestStartStop/group/embed-certs/serial/DeployApp 7.24
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
359 TestStartStop/group/embed-certs/serial/Stop 16.11
361 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.2
362 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
363 TestStartStop/group/embed-certs/serial/SecondStart 44.12
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.63
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
375 TestStartStop/group/newest-cni/serial/FirstStart 25.07
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
378 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/Stop 17.94
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
388 TestStartStop/group/newest-cni/serial/SecondStart 10.39
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23

TestDownloadOnly/v1.28.0/json-events (3.76s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-122372 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-122372 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.758807545s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (3.76s)
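With -o=json, minikube prints one CloudEvent-style JSON object per line on stdout, which is what the json-events assertions parse. A minimal sketch for inspecting the step events of such a run by hand (assumes jq is installed; the demo profile name is illustrative, and the io.k8s.sigs.minikube.step event type follows minikube's published JSON output schema rather than anything captured in this log):

	# Print each step event's name from a download-only run (illustrative sketch)
	out/minikube-linux-amd64 start -o=json --download-only -p demo --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'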

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 12:05:16.107371  355262 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 12:05:16.107518  355262 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
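The preload-exists check only asserts that the tarball named in the log line above is present in the local cache. A hand-run equivalent (a sketch; the path is taken verbatim from the preload.go line above):

	# Confirm the cached preload tarball exists and is non-empty (sketch)
	f=/home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	[ -s "$f" ] && echo "preload present: $(du -h "$f" | cut -f1)"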

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-122372
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-122372: exit status 85 (62.041861ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-122372 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122372 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:05:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:05:12.391982  355275 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:05:12.392257  355275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:12.392268  355275 out.go:374] Setting ErrFile to fd 2...
	I1019 12:05:12.392272  355275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:12.392520  355275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	W1019 12:05:12.392660  355275 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-351705/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-351705/.minikube/config/config.json: no such file or directory
	I1019 12:05:12.393127  355275 out.go:368] Setting JSON to true
	I1019 12:05:12.394175  355275 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6460,"bootTime":1760869052,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:05:12.394266  355275 start.go:141] virtualization: kvm guest
	I1019 12:05:12.396464  355275 out.go:99] [download-only-122372] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1019 12:05:12.396604  355275 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 12:05:12.396632  355275 notify.go:220] Checking for updates...
	I1019 12:05:12.397877  355275 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:05:12.399167  355275 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:05:12.400475  355275 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:05:12.401606  355275 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:05:12.402753  355275 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 12:05:12.404826  355275 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:05:12.405088  355275 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:05:12.429652  355275 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:05:12.429727  355275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:12.485989  355275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 12:05:12.475352048 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:12.486098  355275 docker.go:318] overlay module found
	I1019 12:05:12.487693  355275 out.go:99] Using the docker driver based on user configuration
	I1019 12:05:12.487721  355275 start.go:305] selected driver: docker
	I1019 12:05:12.487728  355275 start.go:925] validating driver "docker" against <nil>
	I1019 12:05:12.487809  355275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:12.540056  355275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-19 12:05:12.530717687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:12.540279  355275 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:05:12.541045  355275 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 12:05:12.541240  355275 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:05:12.542914  355275 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-122372 host does not exist
	  To start a cluster, run: "minikube start -p download-only-122372"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-122372
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (3.93s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-296979 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-296979 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.930994177s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.93s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 12:05:20.446765  355262 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 12:05:20.446812  355262 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-351705/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-296979
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-296979: exit status 85 (61.730002ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-122372 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-122372 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ delete  │ -p download-only-122372                                                                                                                                                   │ download-only-122372 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │ 19 Oct 25 12:05 UTC │
	│ start   │ -o=json --download-only -p download-only-296979 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-296979 │ jenkins │ v1.37.0 │ 19 Oct 25 12:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 12:05:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 12:05:16.558099  355629 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:05:16.558410  355629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:16.558445  355629 out.go:374] Setting ErrFile to fd 2...
	I1019 12:05:16.558457  355629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:05:16.558736  355629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:05:16.559233  355629 out.go:368] Setting JSON to true
	I1019 12:05:16.560315  355629 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6465,"bootTime":1760869052,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:05:16.560448  355629 start.go:141] virtualization: kvm guest
	I1019 12:05:16.562197  355629 out.go:99] [download-only-296979] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:05:16.562333  355629 notify.go:220] Checking for updates...
	I1019 12:05:16.563465  355629 out.go:171] MINIKUBE_LOCATION=21772
	I1019 12:05:16.564804  355629 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:05:16.566108  355629 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:05:16.567255  355629 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:05:16.568442  355629 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 12:05:16.570494  355629 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 12:05:16.570741  355629 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:05:16.593888  355629 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:05:16.594010  355629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:16.649142  355629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-19 12:05:16.638694327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:16.649264  355629 docker.go:318] overlay module found
	I1019 12:05:16.650960  355629 out.go:99] Using the docker driver based on user configuration
	I1019 12:05:16.650986  355629 start.go:305] selected driver: docker
	I1019 12:05:16.650992  355629 start.go:925] validating driver "docker" against <nil>
	I1019 12:05:16.651072  355629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:05:16.709106  355629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:31 OomKillDisable:false NGoroutines:54 SystemTime:2025-10-19 12:05:16.698831296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:05:16.709266  355629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 12:05:16.709746  355629 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1019 12:05:16.709904  355629 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 12:05:16.711746  355629 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-296979 host does not exist
	  To start a cluster, run: "minikube start -p download-only-296979"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-296979
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-580627 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-580627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-580627
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1019 12:05:21.504446  355262 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-904842 --alsologtostderr --binary-mirror http://127.0.0.1:34101 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-904842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-904842
--- PASS: TestBinaryMirror (0.80s)
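The test serves binaries from a local HTTP endpoint and hands it to minikube via --binary-mirror. A rough way to stand up such a mirror by hand (a sketch only; the ./mirror directory layout and demo profile are assumptions, not taken from the test):

	# Serve a directory over HTTP, then point minikube's downloads at it (sketch)
	python3 -m http.server 34101 --directory ./mirror &
	out/minikube-linux-amd64 start --download-only -p demo \
	  --binary-mirror http://127.0.0.1:34101 --driver=docker --container-runtime=crio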

TestOffline (61.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-467900 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-467900 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (59.527378105s)
helpers_test.go:175: Cleaning up "offline-crio-467900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-467900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-467900: (2.380132573s)
--- PASS: TestOffline (61.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-042725
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-042725: exit status 85 (58.073069ms)

-- stdout --
	* Profile "addons-042725" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-042725"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-042725
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-042725: exit status 85 (60.046371ms)

-- stdout --
	* Profile "addons-042725" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-042725"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (160s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-042725 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-042725 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m39.999816288s)
--- PASS: TestAddons/Setup (160.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-042725 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-042725 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (7.41s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-042725 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-042725 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2c8198e2-f656-4274-b959-45650f1182b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2c8198e2-f656-4274-b959-45650f1182b1] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003321434s
addons_test.go:694: (dbg) Run:  kubectl --context addons-042725 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-042725 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-042725 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.41s)
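The printenv calls above verify that the gcp-auth webhook injected the fake credential variables into the pod's environment. To also read the mounted credentials file by hand (a sketch; the single quotes keep the variable expansion inside the pod rather than on the client):

	# Dump the injected (fake) service-account JSON from inside the pod (sketch)
	kubectl --context addons-042725 exec busybox -- /bin/sh -c 'cat "$GOOGLE_APPLICATION_CREDENTIALS"'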

TestAddons/StoppedEnableDisable (16.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-042725
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-042725: (16.395370525s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-042725
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-042725
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-042725
--- PASS: TestAddons/StoppedEnableDisable (16.65s)

TestCertOptions (30.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-868990 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-868990 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.837962772s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-868990 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-868990 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-868990 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-868990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-868990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-868990: (2.889353936s)
--- PASS: TestCertOptions (30.54s)

TestCertExpiration (211.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-599351 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-599351 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.486375055s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-599351 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-599351 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.713380986s)
helpers_test.go:175: Cleaning up "cert-expiration-599351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-599351
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-599351: (2.654246272s)
--- PASS: TestCertExpiration (211.86s)
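Between the two starts, the cluster's certificate lifetime goes from 3m to 8760h. A quick manual spot-check of the apiserver certificate's expiry date (a sketch; the cert path matches the one TestCertOptions inspects above):

	# Show the apiserver certificate's notAfter date inside the node (sketch)
	out/minikube-linux-amd64 -p cert-expiration-599351 ssh \
	  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"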

TestForceSystemdFlag (28.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-278711 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-278711 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.538709509s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-278711 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-278711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-278711
E1019 12:46:05.962476  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-278711: (2.402175335s)
--- PASS: TestForceSystemdFlag (28.22s)
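The ssh step above cats the whole CRI-O drop-in; the property that --force-systemd actually flips is the cgroup manager. A narrower check (a sketch; cgroup_manager is the standard CRI-O config key, assumed here rather than read from this run's output):

	# Confirm CRI-O was switched to the systemd cgroup manager (sketch)
	out/minikube-linux-amd64 -p force-systemd-flag-278711 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"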

TestForceSystemdEnv (31.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-991840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-991840 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.978163571s)
helpers_test.go:175: Cleaning up "force-systemd-env-991840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-991840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-991840: (4.422411581s)
--- PASS: TestForceSystemdEnv (31.40s)

TestKVMDriverInstallOrUpdate (1.01s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1019 12:45:24.457371  355262 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1019 12:45:24.457568  355262 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate303295690/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 12:45:24.486177  355262 install.go:163] /tmp/TestKVMDriverInstallOrUpdate303295690/001/docker-machine-driver-kvm2 version is 1.1.1
W1019 12:45:24.486213  355262 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1019 12:45:24.486301  355262 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1019 12:45:24.486331  355262 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate303295690/001/docker-machine-driver-kvm2
I1019 12:45:25.320561  355262 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate303295690/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 12:45:25.335301  355262 install.go:163] /tmp/TestKVMDriverInstallOrUpdate303295690/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.01s)
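install.go validates the downloaded driver by asking the binary for its version, as the two "version is ..." lines show. The manual equivalent (a sketch; that the driver binary accepts a version argument is an assumption based on what the validator appears to invoke):

	# Ask the freshly downloaded driver for its version (sketch)
	/tmp/TestKVMDriverInstallOrUpdate303295690/001/docker-machine-driver-kvm2 version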

TestErrorSpam/setup (20.19s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-939548 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-939548 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-939548 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-939548 --driver=docker  --container-runtime=crio: (20.190872816s)
--- PASS: TestErrorSpam/setup (20.19s)
TestErrorSpam/start (0.63s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)
TestErrorSpam/status (0.91s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 status
--- PASS: TestErrorSpam/status (0.91s)
TestErrorSpam/pause (6.62s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause: exit status 80 (2.201318874s)
-- stdout --
	* Pausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause: exit status 80 (2.120716735s)
-- stdout --
	* Pausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause: exit status 80 (2.296474973s)
-- stdout --
	* Pausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.62s)
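All three pause attempts above fail identically: minikube enumerates containers by running sudo runc list -f json inside the node, and runc aborts because its default state directory /run/runc does not exist on this CRI-O node. The entry still ends in PASS because the error-spam suite records the failures while asserting on unexpected log output, not on the pause result. A small diagnostic sketch that reruns the failing probe and cross-checks with crictl, which asks CRI-O directly; the commands are the ones visible in the log, the Go wrapper around them is illustrative:

	// Sketch: rerun the failing probe over "minikube ssh", then cross-check
	// with crictl, which queries CRI-O instead of runc's state directory.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "nospam-939548",
			"ssh", "--", "sudo", "runc", "list", "-f", "json")
		if out, err := probe.CombinedOutput(); err != nil {
			// expected on this node: "open /run/runc: no such file or directory"
			fmt.Printf("runc list failed: %v\n%s", err, out)
		}
		check := exec.Command("out/minikube-linux-amd64", "-p", "nospam-939548",
			"ssh", "--", "sudo", "crictl", "ps")
		out, _ := check.CombinedOutput()
		fmt.Println(string(out)) // CRI-O's own view of the running containers
	}

The unpause entry below fails the same way, since listing paused containers goes through the same runc probe.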
TestErrorSpam/unpause (5.49s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause: exit status 80 (2.318424336s)
-- stdout --
	* Unpausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause: exit status 80 (1.678845472s)
-- stdout --
	* Unpausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause: exit status 80 (1.48822081s)
-- stdout --
	* Unpausing node nospam-939548 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-19T12:11:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.49s)
TestErrorSpam/stop (2.55s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 stop: (2.369665424s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-939548 --log_dir /tmp/nospam-939548 stop
--- PASS: TestErrorSpam/stop (2.55s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-351705/.minikube/files/etc/test/nested/copy/355262/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (41.54s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-688409 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.541141366s)
--- PASS: TestFunctional/serial/StartWithProxy (41.54s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (6.11s)
=== RUN   TestFunctional/serial/SoftStart
I1019 12:12:32.986892  355262 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-688409 --alsologtostderr -v=8: (6.111951901s)
functional_test.go:678: soft start took 6.112901864s for "functional-688409" cluster.
I1019 12:12:39.099717  355262 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.11s)
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-688409 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)
TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)
TestFunctional/serial/CacheCmd/cache/add_local (1.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-688409 /tmp/TestFunctionalserialCacheCmdcacheadd_local2980245545/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache add minikube-local-cache-test:functional-688409
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache delete minikube-local-cache-test:functional-688409
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-688409
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.411603ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
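The sequence above is a remove/verify/reload round-trip: crictl rmi deletes the cached pause image inside the node, crictl inspecti then rightly exits 1 ("no such image"), minikube cache reload pushes the cached image back in, and the final inspecti succeeds. A compact Go sketch of the same expect-fail/expect-pass pattern, using the exact commands from the log:

	// Sketch: assert the image is gone, reload the cache, assert it is back.
	package main

	import (
		"log"
		"os/exec"
	)

	func mk(args ...string) error {
		return exec.Command("out/minikube-linux-amd64", args...).Run()
	}

	func main() {
		p, img := "functional-688409", "registry.k8s.io/pause:latest"
		if err := mk("-p", p, "ssh", "sudo", "crictl", "rmi", img); err != nil {
			log.Fatal(err)
		}
		if mk("-p", p, "ssh", "sudo", "crictl", "inspecti", img) == nil {
			log.Fatal("image still present after rmi")
		}
		if err := mk("-p", p, "cache", "reload"); err != nil {
			log.Fatal(err)
		}
		if err := mk("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
			log.Fatal("image still missing after cache reload")
		}
	}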
TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)
TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 kubectl -- --context functional-688409 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-688409 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
TestFunctional/serial/ExtraConfig (65.2s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1019 12:13:02.898128  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:02.904486  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:02.915830  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:02.937208  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:02.978589  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:03.060025  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:03.221588  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:03.543248  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:04.184616  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:05.466206  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:08.027576  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:13.149105  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:23.390744  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:13:43.872119  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-688409 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m5.200535747s)
functional_test.go:776: restart took 1m5.200654548s for "functional-688409" cluster.
I1019 12:13:50.504692  355262 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (65.20s)
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-688409 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
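The component check above reduces to: fetch the control-plane pods as JSON and require phase Running plus a Ready condition for each of etcd, kube-apiserver, kube-controller-manager, and kube-scheduler. A minimal Go sketch of that check against the kubectl invocation shown; the structs cover only the fields the check needs:

	// Sketch: decode the pod list and report phase/readiness per component.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-688409",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					ready = true
				}
			}
			fmt.Printf("%s phase=%s ready=%v\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}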
TestFunctional/serial/LogsCmd (1.18s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 logs: (1.181958693s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)
TestFunctional/serial/LogsFileCmd (1.21s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 logs --file /tmp/TestFunctionalserialLogsFileCmd3831012969/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 logs --file /tmp/TestFunctionalserialLogsFileCmd3831012969/001/logs.txt: (1.209916898s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)
TestFunctional/serial/InvalidService (3.79s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-688409 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-688409
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-688409: exit status 115 (332.415001ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30895 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-688409 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.79s)
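SVC_UNREACHABLE here means minikube resolved the NodePort (30895 in the table) but found no running pod behind invalid-svc, so it refuses to hand out the URL. To confirm that diagnosis by hand, the service's endpoints should come back empty; a one-off sketch against the same context (kubectl get endpoints is standard kubectl, the Go wrapper is illustrative):

	// Sketch: an empty ENDPOINTS column matches the SVC_UNREACHABLE verdict.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, _ := exec.Command("kubectl", "--context", "functional-688409",
			"get", "endpoints", "invalid-svc").CombinedOutput()
		fmt.Println(string(out))
	}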
TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 config get cpus: exit status 14 (94.627666ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 config get cpus: exit status 14 (62.099646ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
TestFunctional/parallel/DashboardCmd (6.93s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-688409 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-688409 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 392913: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.93s)
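The helpers_test.go:525 line is a benign teardown race: the dashboard daemon exited on its own before the test killed it. In Go this surfaces as os.ErrProcessDone from Process.Kill once the process has been reaped, and it is safe to swallow; a sketch of the tolerant cleanup, with the binary path and flags copied from the log:

	// Sketch: start a daemon, reap it in the background, and ignore the
	// "process already finished" case when killing it during cleanup.
	package main

	import (
		"errors"
		"log"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "dashboard", "--url",
			"--port", "36195", "-p", "functional-688409")
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		go cmd.Wait()               // reap the daemon if it exits on its own
		time.Sleep(5 * time.Second) // stand-in for the real test body
		if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
			log.Fatalf("kill: %v", err)
		}
	}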
TestFunctional/parallel/DryRun (0.66s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-688409 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (447.758137ms)
-- stdout --
	* [functional-688409] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1019 12:14:18.490126  392493 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:14:18.490408  392493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.490417  392493 out.go:374] Setting ErrFile to fd 2...
	I1019 12:14:18.490436  392493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.490636  392493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:14:18.491099  392493 out.go:368] Setting JSON to false
	I1019 12:14:18.492145  392493 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7006,"bootTime":1760869052,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:14:18.492244  392493 start.go:141] virtualization: kvm guest
	I1019 12:14:18.504115  392493 out.go:179] * [functional-688409] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:14:18.512362  392493 notify.go:220] Checking for updates...
	I1019 12:14:18.512387  392493 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:14:18.520794  392493 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:14:18.549438  392493 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:14:18.597808  392493 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:14:18.602058  392493 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:14:18.604881  392493 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:14:18.607026  392493 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:18.607787  392493 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:14:18.631286  392493 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:14:18.631482  392493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:14:18.692069  392493 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 12:14:18.681722918 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:14:18.692160  392493 docker.go:318] overlay module found
	I1019 12:14:18.748851  392493 out.go:179] * Using the docker driver based on existing profile
	I1019 12:14:18.773063  392493 start.go:305] selected driver: docker
	I1019 12:14:18.773087  392493 start.go:925] validating driver "docker" against &{Name:functional-688409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-688409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:18.773199  392493 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:14:18.824307  392493 out.go:203] 
	W1019 12:14:18.830489  392493 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 12:14:18.852533  392493 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.66s)
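Exit status 23 in the first dry run is the RSRC_INSUFFICIENT_REQ_MEMORY gate firing before any resources are touched: 250MiB requested against an 1800MB usable floor. The gate itself is a plain comparison; a hedged sketch (the floor is quoted from the message above, not taken from minikube's source):

	// Sketch: reject memory requests below the usable minimum.
	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the error message above

	func validateMemory(reqMB int) error {
		if reqMB < minUsableMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", reqMB, minUsableMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateMemory(250))  // fails, as in the log
		fmt.Println(validateMemory(4096)) // passes
	}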
TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-688409 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-688409 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.627226ms)
-- stdout --
	* [functional-688409] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1019 12:14:18.338376  392410 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:14:18.338675  392410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.338685  392410 out.go:374] Setting ErrFile to fd 2...
	I1019 12:14:18.338690  392410 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:14:18.338978  392410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:14:18.339393  392410 out.go:368] Setting JSON to false
	I1019 12:14:18.340398  392410 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7006,"bootTime":1760869052,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:14:18.340508  392410 start.go:141] virtualization: kvm guest
	I1019 12:14:18.342701  392410 out.go:179] * [functional-688409] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1019 12:14:18.344309  392410 notify.go:220] Checking for updates...
	I1019 12:14:18.344313  392410 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:14:18.345629  392410 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:14:18.347132  392410 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:14:18.348539  392410 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:14:18.349741  392410 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:14:18.350989  392410 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:14:18.352527  392410 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:14:18.353055  392410 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:14:18.376510  392410 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:14:18.376602  392410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:14:18.434300  392410 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-19 12:14:18.424030843 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:14:18.434439  392410 docker.go:318] overlay module found
	I1019 12:14:18.436091  392410 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1019 12:14:18.437276  392410 start.go:305] selected driver: docker
	I1019 12:14:18.437292  392410 start.go:925] validating driver "docker" against &{Name:functional-688409 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-688409 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 12:14:18.437408  392410 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:14:18.439211  392410 out.go:203] 
	W1019 12:14:18.440408  392410 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 12:14:18.441540  392410 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
TestFunctional/parallel/StatusCmd (0.91s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
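
The -f flag above applies a Go text/template to minikube's status struct, so the same fields can be read programmatically from status -o json. A minimal sketch, assuming the JSON keys match the template fields used above (Host, Kubelet, APIServer, Kubeconfig) and using an illustrative profile name:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the template above pulls out; extra JSON keys are ignored.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "status", "-o", "json").Output()
	if err != nil {
		// status exits non-zero when components are degraded but still
		// prints JSON, so fall through and try to decode what we got.
		fmt.Println("status exited non-zero:", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}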

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (23.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [fe4b7aea-cfcb-4821-8ded-480c5c5978bc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004044613s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-688409 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-688409 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-688409 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-688409 apply -f testdata/storage-provisioner/pod.yaml
I1019 12:14:03.771553  355262 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e23bcfcb-562b-40d3-a361-e1c943b8103e] Pending
helpers_test.go:352: "sp-pod" [e23bcfcb-562b-40d3-a361-e1c943b8103e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e23bcfcb-562b-40d3-a361-e1c943b8103e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003066494s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-688409 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-688409 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-688409 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0210217e-46ee-4634-9ed4-071d1d2c08ff] Pending
helpers_test.go:352: "sp-pod" [0210217e-46ee-4634-9ed4-071d1d2c08ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0210217e-46ee-4634-9ed4-071d1d2c08ff] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003517483s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-688409 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.62s)
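
The sequence above is the substance of the test: a file written through the first sp-pod must still exist after that pod is deleted and recreated, because the PVC outlives any one pod. A condensed sketch of the same round trip, reusing the manifest path and context from the log and skipping the pod-readiness waits the real test performs:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the context used in this test run.
func run(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-688409"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
	}
	for _, s := range steps {
		if out, err := run(s...); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
			return
		}
	}
	// The real test waits for the recreated pod to be Running before this.
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount (err=%v):\n%s", err, out)
}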

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (1.77s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh -n functional-688409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cp functional-688409:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1490378076/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh -n functional-688409 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh -n functional-688409 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)
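
Three copy directions are covered here: host to guest, guest back to host, and a copy into a guest directory that does not yet exist, which passes because minikube cp creates missing parent directories. A sketch of the same three copies; the profile name and host-side paths are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against an illustrative profile.
func mk(args ...string) error {
	full := append([]string{"-p", "demo"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("minikube %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	copies := [][]string{
		{"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"},        // host -> guest
		{"cp", "demo:/home/docker/cp-test.txt", "/tmp/cp-test.txt"},       // guest -> host
		{"cp", "testdata/cp-test.txt", "/tmp/does/not/exist/cp-test.txt"}, // parents created
	}
	for _, c := range copies {
		if err := mk(c...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("all three copy directions succeeded")
}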

TestFunctional/parallel/MySQL (15.43s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-688409 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-hhrsg" [4470a0c7-ce12-4d8c-b4ef-ef13918a062a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-hhrsg" [4470a0c7-ce12-4d8c-b4ef-ef13918a062a] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003307547s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-688409 exec mysql-5bb876957f-hhrsg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-688409 exec mysql-5bb876957f-hhrsg -- mysql -ppassword -e "show databases;": exit status 1 (87.457653ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1019 12:14:41.074148  355262 retry.go:31] will retry after 807.395915ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-688409 exec mysql-5bb876957f-hhrsg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-688409 exec mysql-5bb876957f-hhrsg -- mysql -ppassword -e "show databases;": exit status 1 (85.566561ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1019 12:14:41.968410  355262 retry.go:31] will retry after 1.193328581s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-688409 exec mysql-5bb876957f-hhrsg -- mysql -ppassword -e "show databases;"
E1019 12:15:46.755930  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:18:02.889093  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:18:30.597444  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:23:02.889050  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (15.43s)
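
ERROR 2002 above only means mysqld inside the pod has not finished initializing its socket, so the harness retries with growing delays (the retry.go lines) until the query succeeds. A sketch of that retry loop, using the pod name from this run and an illustrative backoff schedule:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-688409",
			"exec", "mysql-5bb876957f-hhrsg", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("connected on attempt %d:\n%s", attempt, out)
			return
		}
		// ERROR 2002 means mysqld has not bound its socket yet; back off and retry.
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysqld never became reachable")
}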

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/355262/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /etc/test/nested/copy/355262/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.63s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/355262.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /etc/ssl/certs/355262.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/355262.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /usr/share/ca-certificates/355262.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3552622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /etc/ssl/certs/3552622.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3552622.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /usr/share/ca-certificates/3552622.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)
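
The checks alternate between two path styles because CA certificates land on the node twice: under a human-readable name (355262.pem) and under an OpenSSL subject-hash alias (the 8-hex-digit .0 form, e.g. 51391683.0) that TLS libraries use for lookup. A sketch that probes both kinds of path the way the test does; the profile name is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths from this run: the synced test cert in two locations,
	// plus the subject-hash alias that OpenSSL-style lookups resolve.
	paths := []string{
		"/etc/ssl/certs/355262.pem",
		"/usr/share/ca-certificates/355262.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		out, err := exec.Command("minikube", "-p", "demo", "ssh", "sudo cat "+p).Output()
		if err != nil {
			fmt.Printf("%s: missing or unreadable: %v\n", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(out))
	}
}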

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-688409 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "sudo systemctl is-active docker": exit status 1 (328.099913ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "sudo systemctl is-active containerd": exit status 1 (372.73327ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
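
The exit-status noise above is the expected outcome: systemctl is-active prints the unit state and exits non-zero for anything but "active", and minikube ssh surfaces that as its own non-zero exit, so "inactive" on stdout plus a failed command is exactly what a crio-only node should report. A sketch that treats that combination as success; the profile name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// Stdout carries the unit state; a non-zero exit is expected
		// whenever the state is anything but "active".
		out, err := exec.Command("minikube", "-p", "demo", "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "inactive" && err != nil {
			fmt.Printf("%s: inactive, as required for a crio cluster\n", unit)
			continue
		}
		fmt.Printf("%s: unexpected state %q (err=%v)\n", unit, state, err)
	}
}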

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 388124: os: process already finished
helpers_test.go:519: unable to terminate pid 387645: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-688409 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d09cda05-b86f-49a4-8d30-6e14b375f3f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d09cda05-b86f-49a4-8d30-6e14b375f3f4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002922747s
I1019 12:14:06.874529  355262 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-688409 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
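
The jsonpath query above returns an empty string until a running minikube tunnel claims the LoadBalancer service and writes an ingress IP into its status, so in practice the query is polled. A sketch of that wait, assuming a tunnel is already running in another process; the timings are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same query the test runs; empty output means the tunnel has not
		// assigned an ingress IP yet.
		out, err := exec.Command("kubectl", "--context", "functional-688409",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("tunnel assigned ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP before deadline; is minikube tunnel running?")
}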

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.38.104 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-688409 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "319.409789ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.74923ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "316.874308ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.477042ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (5.47s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdany-port31407524/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760876048158134256" to /tmp/TestFunctionalparallelMountCmdany-port31407524/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760876048158134256" to /tmp/TestFunctionalparallelMountCmdany-port31407524/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760876048158134256" to /tmp/TestFunctionalparallelMountCmdany-port31407524/001/test-1760876048158134256
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (270.476292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 12:14:08.428917  355262 retry.go:31] will retry after 326.819794ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 12:14 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 12:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 12:14 test-1760876048158134256
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh cat /mount-9p/test-1760876048158134256
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-688409 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e32a9c77-4fe5-4e82-9547-4dd8aed29401] Pending
helpers_test.go:352: "busybox-mount" [e32a9c77-4fe5-4e82-9547-4dd8aed29401] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e32a9c77-4fe5-4e82-9547-4dd8aed29401] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e32a9c77-4fe5-4e82-9547-4dd8aed29401] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003955113s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-688409 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdany-port31407524/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.47s)
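
The failed first findmnt above is just a race: the 9p mount is not up yet when the probe fires, so the harness retries. A sketch of the same pattern, starting the mount in the background and polling until it appears; the host directory and profile name are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	mount := exec.Command("minikube", "mount", "-p", "demo", "/tmp/host-dir:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("mount:", err)
		return
	}
	// The suite cleans up with `minikube mount -p <profile> --kill=true`;
	// killing the child process is the blunt equivalent.
	defer mount.Process.Kill()

	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "demo", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}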

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdspecific-port4068513976/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.353164ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 12:14:13.902260  355262 retry.go:31] will retry after 648.157196ms: exit status 1
I1019 12:14:13.936291  355262 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdspecific-port4068513976/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "sudo umount -f /mount-9p": exit status 1 (257.379269ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-688409 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdspecific-port4068513976/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T" /mount1: exit status 1 (323.072076ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1019 12:14:15.873899  355262 retry.go:31] will retry after 685.820744ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-688409 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-688409 /tmp/TestFunctionalparallelMountCmdVerifyCleanup755637603/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688409 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688409 image ls --format short --alsologtostderr:
I1019 12:14:28.056696  394771 out.go:360] Setting OutFile to fd 1 ...
I1019 12:14:28.057001  394771 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.057012  394771 out.go:374] Setting ErrFile to fd 2...
I1019 12:14:28.057016  394771 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.057281  394771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
I1019 12:14:28.057973  394771 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.058062  394771 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.058453  394771 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
I1019 12:14:28.076334  394771 ssh_runner.go:195] Run: systemctl --version
I1019 12:14:28.076388  394771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
I1019 12:14:28.093691  394771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
I1019 12:14:28.187190  394771 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
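
As the stderr shows, image ls is a thin wrapper: minikube inspects the container, opens an ssh session, and runs sudo crictl images --output json inside the node. A sketch that decodes that JSON directly; the field names mirror the CRI list-images response (and the size-as-string encoding visible in the ImageListJson output below) and are stated here as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape of `crictl images --output json`; unknown keys are ignored.
type criImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"` // uint64 serialized as a string
	} `json:"images"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "ssh",
		"sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("ssh:", err)
		return
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, im := range imgs.Images {
		fmt.Println(im.ID, im.RepoTags, im.Size)
	}
}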

TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688409 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-688409  │ 53f1e935d6e4a │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688409 image ls --format table --alsologtostderr:
I1019 12:14:34.831915  395808 out.go:360] Setting OutFile to fd 1 ...
I1019 12:14:34.832188  395808 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:34.832200  395808 out.go:374] Setting ErrFile to fd 2...
I1019 12:14:34.832203  395808 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:34.832444  395808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
I1019 12:14:34.833069  395808 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:34.833177  395808 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:34.833594  395808 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
I1019 12:14:34.851071  395808 ssh_runner.go:195] Run: systemctl --version
I1019 12:14:34.851116  395808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
I1019 12:14:34.871483  395808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
I1019 12:14:34.970366  395808 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.41s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688409 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"53f1e935d6e4a028e53e4c498489b702354f71ad3a215d3212b7bb1ec200aecd","repoDigests":["localhost/my-image@sha256:7e366d332e334b0194ef95144ed50bfa23cfcbb63f19bb6b082142e77371e6bd"],"repoTags":["localhost/my-image:functional-688409"],"size":"1468744"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"5c64c41451b6afa4162d450e2f45e27dce7945a82645b39f98c83fd55d53984c","repoDigests":["docker.io/library/734b7e543cacb76edcbef4eab35f8edd44ee9a41aa8be227e6c057b6387a7819-tmp@sha256:0c2e316d7fe40d5ee7ebae2e7de8dc1897725c53b8060d6414d0ddcca9d68545"],"repoTags":[],"size":"1466132"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688409 image ls --format json --alsologtostderr:
I1019 12:14:34.616221  395725 out.go:360] Setting OutFile to fd 1 ...
I1019 12:14:34.616583  395725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:34.616598  395725 out.go:374] Setting ErrFile to fd 2...
I1019 12:14:34.616602  395725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:34.616795  395725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
I1019 12:14:34.617353  395725 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:34.617470  395725 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:34.617996  395725 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
I1019 12:14:34.636598  395725 ssh_runner.go:195] Run: systemctl --version
I1019 12:14:34.636649  395725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
I1019 12:14:34.653759  395725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
I1019 12:14:34.750335  395725 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688409 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688409 image ls --format yaml --alsologtostderr:
I1019 12:14:28.264381  394828 out.go:360] Setting OutFile to fd 1 ...
I1019 12:14:28.264644  394828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.264654  394828 out.go:374] Setting ErrFile to fd 2...
I1019 12:14:28.264658  394828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.264835  394828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
I1019 12:14:28.265399  394828 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.265503  394828 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.265887  394828 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
I1019 12:14:28.283317  394828 ssh_runner.go:195] Run: systemctl --version
I1019 12:14:28.283364  394828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
I1019 12:14:28.301384  394828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
I1019 12:14:28.397243  394828 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
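
Note: to pull just the tags out of a YAML image listing like the one above, a short pipeline works; this is a sketch assuming mikefarah's yq (v4) is on the PATH, and is not part of the test itself:

  out/minikube-linux-amd64 -p functional-688409 image ls --format yaml \
    | yq '.[].repoTags[]'   # one tag per line, e.g. registry.k8s.io/pause:3.1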

TestFunctional/parallel/ImageCommands/ImageBuild (6.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-688409 ssh pgrep buildkitd: exit status 1 (257.700726ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image build -t localhost/my-image:functional-688409 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 image build -t localhost/my-image:functional-688409 testdata/build --alsologtostderr: (5.657660778s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-688409 image build -t localhost/my-image:functional-688409 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5c64c41451b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-688409
--> 53f1e935d6e
Successfully tagged localhost/my-image:functional-688409
53f1e935d6e4a028e53e4c498489b702354f71ad3a215d3212b7bb1ec200aecd
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-688409 image build -t localhost/my-image:functional-688409 testdata/build --alsologtostderr:
I1019 12:14:28.734750  395008 out.go:360] Setting OutFile to fd 1 ...
I1019 12:14:28.734973  395008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.734981  395008 out.go:374] Setting ErrFile to fd 2...
I1019 12:14:28.734985  395008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 12:14:28.735172  395008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
I1019 12:14:28.735782  395008 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.736517  395008 config.go:182] Loaded profile config "functional-688409": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 12:14:28.737001  395008 cli_runner.go:164] Run: docker container inspect functional-688409 --format={{.State.Status}}
I1019 12:14:28.754583  395008 ssh_runner.go:195] Run: systemctl --version
I1019 12:14:28.754635  395008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-688409
I1019 12:14:28.772572  395008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/functional-688409/id_rsa Username:docker}
I1019 12:14:28.866868  395008 build_images.go:161] Building image from path: /tmp/build.2796645539.tar
I1019 12:14:28.866942  395008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 12:14:28.875167  395008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2796645539.tar
I1019 12:14:28.879085  395008 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2796645539.tar: stat -c "%s %y" /var/lib/minikube/build/build.2796645539.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2796645539.tar': No such file or directory
I1019 12:14:28.879126  395008 ssh_runner.go:362] scp /tmp/build.2796645539.tar --> /var/lib/minikube/build/build.2796645539.tar (3072 bytes)
I1019 12:14:28.896843  395008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2796645539
I1019 12:14:28.904436  395008 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2796645539 -xf /var/lib/minikube/build/build.2796645539.tar
I1019 12:14:28.913459  395008 crio.go:315] Building image: /var/lib/minikube/build/build.2796645539
I1019 12:14:28.913537  395008 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-688409 /var/lib/minikube/build/build.2796645539 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1019 12:14:34.325500  395008 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-688409 /var/lib/minikube/build/build.2796645539 --cgroup-manager=cgroupfs: (5.411934838s)
I1019 12:14:34.325556  395008 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2796645539
I1019 12:14:34.333740  395008 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2796645539.tar
I1019 12:14:34.341147  395008 build_images.go:217] Built localhost/my-image:functional-688409 from /tmp/build.2796645539.tar
I1019 12:14:34.341180  395008 build_images.go:133] succeeded building to: functional-688409
I1019 12:14:34.341187  395008 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.13s)
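
Note: the STEP lines in the stdout above imply a build context equivalent to the sketch below. The actual contents of testdata/build are not reproduced in this report, so the file name and path here are assumptions:

  # Reconstructed from the logged STEP 1/3..3/3 (assumed layout of testdata/build):
  cat > testdata/build/Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /
  EOF

As the stderr shows, minikube tars this context (/tmp/build.2796645539.tar), copies it into the node, and replays it with `sudo podman build` because the container runtime is crio.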

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-688409
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image rm kicbase/echo-server:functional-688409 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
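
Note: the follow-up `image ls` is the verification step. An equivalent manual check might look like this (sketch):

  out/minikube-linux-amd64 -p functional-688409 image ls \
    | grep kicbase/echo-server || echo "image removed"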

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 service list: (1.705063638s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-688409 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-688409 service list -o json: (1.683990469s)
functional_test.go:1504: Took "1.684104228s" to run "out/minikube-linux-amd64 -p functional-688409 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
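
Note: the JSON form is handy for scripting. A sketch, assuming the list entries carry Namespace/Name/URLs fields as minikube's service list emits them:

  out/minikube-linux-amd64 -p functional-688409 service list -o json \
    | jq -r '.[].Name'   # field name assumed; prints one service per line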

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-688409
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-688409
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-688409
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (101.86s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m41.13770291s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (101.86s)

TestMultiControlPlane/serial/DeployApp (3.84s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 kubectl -- rollout status deployment/busybox: (2.001506204s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-22sx7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-dz9h5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-gjjct -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-22sx7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-dz9h5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-gjjct -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-22sx7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-dz9h5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-gjjct -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.84s)
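
Note: the three busybox-7b57f96db7-* pods indicate that ./testdata/ha/ha-pod-dns-test.yaml deploys a 3-replica busybox Deployment. The real manifest is not reproduced in this report; a minimal equivalent sketch (image tag borrowed from the image list earlier in this report, command assumed):

  kubectl --context ha-906867 apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: busybox
  spec:
    replicas: 3
    selector:
      matchLabels: {app: busybox}
    template:
      metadata:
        labels: {app: busybox}
      spec:
        containers:
        - name: busybox
          image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
          command: ["sleep", "3600"]
  EOF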

TestMultiControlPlane/serial/PingHostFromPods (0.96s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-22sx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-22sx7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-dz9h5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-dz9h5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-gjjct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 kubectl -- exec busybox-7b57f96db7-gjjct -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.96s)
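
Note: the awk/cut pipeline above extracts the host IP from busybox's nslookup output, whose layout is roughly the following (assumed; the test relies on the answer landing on line 5):

  #   Server:    10.96.0.10
  #   Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
  #
  #   Name:      host.minikube.internal
  #   Address 1: 192.168.49.1 host.minikube.internal
  nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.49.1, which ping -c 1 then confirms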

TestMultiControlPlane/serial/AddWorkerNode (26.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 node add --alsologtostderr -v 5: (26.032114599s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.89s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-906867 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

TestMultiControlPlane/serial/CopyFile (16.51s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp testdata/cp-test.txt ha-906867:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile222027801/001/cp-test_ha-906867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867:/home/docker/cp-test.txt ha-906867-m02:/home/docker/cp-test_ha-906867_ha-906867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test_ha-906867_ha-906867-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867:/home/docker/cp-test.txt ha-906867-m03:/home/docker/cp-test_ha-906867_ha-906867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test_ha-906867_ha-906867-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867:/home/docker/cp-test.txt ha-906867-m04:/home/docker/cp-test_ha-906867_ha-906867-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test_ha-906867_ha-906867-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp testdata/cp-test.txt ha-906867-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile222027801/001/cp-test_ha-906867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m02:/home/docker/cp-test.txt ha-906867:/home/docker/cp-test_ha-906867-m02_ha-906867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test_ha-906867-m02_ha-906867.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m02:/home/docker/cp-test.txt ha-906867-m03:/home/docker/cp-test_ha-906867-m02_ha-906867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test_ha-906867-m02_ha-906867-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m02:/home/docker/cp-test.txt ha-906867-m04:/home/docker/cp-test_ha-906867-m02_ha-906867-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test_ha-906867-m02_ha-906867-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp testdata/cp-test.txt ha-906867-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile222027801/001/cp-test_ha-906867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m03:/home/docker/cp-test.txt ha-906867:/home/docker/cp-test_ha-906867-m03_ha-906867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test_ha-906867-m03_ha-906867.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m03:/home/docker/cp-test.txt ha-906867-m02:/home/docker/cp-test_ha-906867-m03_ha-906867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test_ha-906867-m03_ha-906867-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m03:/home/docker/cp-test.txt ha-906867-m04:/home/docker/cp-test_ha-906867-m03_ha-906867-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test_ha-906867-m03_ha-906867-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp testdata/cp-test.txt ha-906867-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile222027801/001/cp-test_ha-906867-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m04:/home/docker/cp-test.txt ha-906867:/home/docker/cp-test_ha-906867-m04_ha-906867.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867 "sudo cat /home/docker/cp-test_ha-906867-m04_ha-906867.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m04:/home/docker/cp-test.txt ha-906867-m02:/home/docker/cp-test_ha-906867-m04_ha-906867-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m02 "sudo cat /home/docker/cp-test_ha-906867-m04_ha-906867-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 cp ha-906867-m04:/home/docker/cp-test.txt ha-906867-m03:/home/docker/cp-test_ha-906867-m04_ha-906867-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 ssh -n ha-906867-m03 "sudo cat /home/docker/cp-test_ha-906867-m04_ha-906867-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.51s)
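
Note: the long sequence above is an all-pairs copy check across the four nodes; condensed, it is roughly equivalent to this loop (sketch):

  nodes="ha-906867 ha-906867-m02 ha-906867-m03 ha-906867-m04"
  for src in $nodes; do
    out/minikube-linux-amd64 -p ha-906867 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
    for dst in $nodes; do
      [ "$src" = "$dst" ] && continue
      out/minikube-linux-amd64 -p ha-906867 cp "$src:/home/docker/cp-test.txt" \
        "$dst:/home/docker/cp-test_${src}_${dst}.txt"
      out/minikube-linux-amd64 -p ha-906867 ssh -n "$dst" \
        "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
    done
  done

(The test also copies each node's file back to a local /tmp directory, omitted here for brevity.)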

TestMultiControlPlane/serial/StopSecondaryNode (14.2s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 node stop m02 --alsologtostderr -v 5: (13.519389567s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5: exit status 7 (676.479671ms)
-- stdout --
	ha-906867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-906867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906867-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-906867-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1019 12:26:54.121858  420488 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:26:54.122000  420488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:26:54.122011  420488 out.go:374] Setting ErrFile to fd 2...
	I1019 12:26:54.122017  420488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:26:54.122230  420488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:26:54.122461  420488 out.go:368] Setting JSON to false
	I1019 12:26:54.122492  420488 mustload.go:65] Loading cluster: ha-906867
	I1019 12:26:54.122620  420488 notify.go:220] Checking for updates...
	I1019 12:26:54.122919  420488 config.go:182] Loaded profile config "ha-906867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:26:54.122941  420488 status.go:174] checking status of ha-906867 ...
	I1019 12:26:54.123377  420488 cli_runner.go:164] Run: docker container inspect ha-906867 --format={{.State.Status}}
	I1019 12:26:54.143718  420488 status.go:371] ha-906867 host status = "Running" (err=<nil>)
	I1019 12:26:54.143742  420488 host.go:66] Checking if "ha-906867" exists ...
	I1019 12:26:54.144008  420488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906867
	I1019 12:26:54.162363  420488 host.go:66] Checking if "ha-906867" exists ...
	I1019 12:26:54.162679  420488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:26:54.162736  420488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906867
	I1019 12:26:54.181098  420488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/ha-906867/id_rsa Username:docker}
	I1019 12:26:54.273922  420488 ssh_runner.go:195] Run: systemctl --version
	I1019 12:26:54.280082  420488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:26:54.292187  420488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:26:54.352138  420488 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-19 12:26:54.340259559 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:26:54.352695  420488 kubeconfig.go:125] found "ha-906867" server: "https://192.168.49.254:8443"
	I1019 12:26:54.352729  420488 api_server.go:166] Checking apiserver status ...
	I1019 12:26:54.352766  420488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:26:54.364523  420488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup
	W1019 12:26:54.372713  420488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1258/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:26:54.372759  420488 ssh_runner.go:195] Run: ls
	I1019 12:26:54.376317  420488 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 12:26:54.380298  420488 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 12:26:54.380317  420488 status.go:463] ha-906867 apiserver status = Running (err=<nil>)
	I1019 12:26:54.380327  420488 status.go:176] ha-906867 status: &{Name:ha-906867 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:26:54.380347  420488 status.go:174] checking status of ha-906867-m02 ...
	I1019 12:26:54.380601  420488 cli_runner.go:164] Run: docker container inspect ha-906867-m02 --format={{.State.Status}}
	I1019 12:26:54.398977  420488 status.go:371] ha-906867-m02 host status = "Stopped" (err=<nil>)
	I1019 12:26:54.398997  420488 status.go:384] host is not running, skipping remaining checks
	I1019 12:26:54.399003  420488 status.go:176] ha-906867-m02 status: &{Name:ha-906867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:26:54.399022  420488 status.go:174] checking status of ha-906867-m03 ...
	I1019 12:26:54.399266  420488 cli_runner.go:164] Run: docker container inspect ha-906867-m03 --format={{.State.Status}}
	I1019 12:26:54.417690  420488 status.go:371] ha-906867-m03 host status = "Running" (err=<nil>)
	I1019 12:26:54.417712  420488 host.go:66] Checking if "ha-906867-m03" exists ...
	I1019 12:26:54.417949  420488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906867-m03
	I1019 12:26:54.435880  420488 host.go:66] Checking if "ha-906867-m03" exists ...
	I1019 12:26:54.436222  420488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:26:54.436314  420488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906867-m03
	I1019 12:26:54.454381  420488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/ha-906867-m03/id_rsa Username:docker}
	I1019 12:26:54.548894  420488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:26:54.561472  420488 kubeconfig.go:125] found "ha-906867" server: "https://192.168.49.254:8443"
	I1019 12:26:54.561501  420488 api_server.go:166] Checking apiserver status ...
	I1019 12:26:54.561560  420488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:26:54.572450  420488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W1019 12:26:54.580709  420488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:26:54.580765  420488 ssh_runner.go:195] Run: ls
	I1019 12:26:54.584430  420488 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1019 12:26:54.588472  420488 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1019 12:26:54.588498  420488 status.go:463] ha-906867-m03 apiserver status = Running (err=<nil>)
	I1019 12:26:54.588509  420488 status.go:176] ha-906867-m03 status: &{Name:ha-906867-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:26:54.588533  420488 status.go:174] checking status of ha-906867-m04 ...
	I1019 12:26:54.588820  420488 cli_runner.go:164] Run: docker container inspect ha-906867-m04 --format={{.State.Status}}
	I1019 12:26:54.606527  420488 status.go:371] ha-906867-m04 host status = "Running" (err=<nil>)
	I1019 12:26:54.606551  420488 host.go:66] Checking if "ha-906867-m04" exists ...
	I1019 12:26:54.606782  420488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-906867-m04
	I1019 12:26:54.624150  420488 host.go:66] Checking if "ha-906867-m04" exists ...
	I1019 12:26:54.624456  420488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:26:54.624533  420488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-906867-m04
	I1019 12:26:54.642382  420488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/ha-906867-m04/id_rsa Username:docker}
	I1019 12:26:54.735669  420488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:26:54.747856  420488 status.go:176] ha-906867-m04 status: &{Name:ha-906867-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.20s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 node start m02 --alsologtostderr -v 5: (13.908068063s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.2s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 stop --alsologtostderr -v 5: (49.262646969s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 start --wait true --alsologtostderr -v 5
E1019 12:28:02.889950  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 start --wait true --alsologtostderr -v 5: (56.833341505s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.20s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node delete m03 --alsologtostderr -v 5
E1019 12:28:57.423381  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.429835  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.441387  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.463060  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.505080  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.586588  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:57.748741  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:58.070661  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:58.712551  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:28:59.994680  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:29:02.557522  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 node delete m03 --alsologtostderr -v 5: (9.746326904s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
E1019 12:29:07.679442  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.54s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (41.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 stop --alsologtostderr -v 5
E1019 12:29:17.921187  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:29:25.960583  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:29:38.403339  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 stop --alsologtostderr -v 5: (41.546219005s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5: exit status 7 (105.317699ms)
-- stdout --
	ha-906867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-906867-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1019 12:29:50.186015  434516 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:29:50.186288  434516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:29:50.186299  434516 out.go:374] Setting ErrFile to fd 2...
	I1019 12:29:50.186303  434516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:29:50.186529  434516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:29:50.186725  434516 out.go:368] Setting JSON to false
	I1019 12:29:50.186750  434516 mustload.go:65] Loading cluster: ha-906867
	I1019 12:29:50.186914  434516 notify.go:220] Checking for updates...
	I1019 12:29:50.187303  434516 config.go:182] Loaded profile config "ha-906867": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:29:50.187327  434516 status.go:174] checking status of ha-906867 ...
	I1019 12:29:50.187846  434516 cli_runner.go:164] Run: docker container inspect ha-906867 --format={{.State.Status}}
	I1019 12:29:50.206392  434516 status.go:371] ha-906867 host status = "Stopped" (err=<nil>)
	I1019 12:29:50.206444  434516 status.go:384] host is not running, skipping remaining checks
	I1019 12:29:50.206457  434516 status.go:176] ha-906867 status: &{Name:ha-906867 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:29:50.206498  434516 status.go:174] checking status of ha-906867-m02 ...
	I1019 12:29:50.206790  434516 cli_runner.go:164] Run: docker container inspect ha-906867-m02 --format={{.State.Status}}
	I1019 12:29:50.224503  434516 status.go:371] ha-906867-m02 host status = "Stopped" (err=<nil>)
	I1019 12:29:50.224534  434516 status.go:384] host is not running, skipping remaining checks
	I1019 12:29:50.224541  434516 status.go:176] ha-906867-m02 status: &{Name:ha-906867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:29:50.224564  434516 status.go:174] checking status of ha-906867-m04 ...
	I1019 12:29:50.224833  434516 cli_runner.go:164] Run: docker container inspect ha-906867-m04 --format={{.State.Status}}
	I1019 12:29:50.242563  434516 status.go:371] ha-906867-m04 host status = "Stopped" (err=<nil>)
	I1019 12:29:50.242592  434516 status.go:384] host is not running, skipping remaining checks
	I1019 12:29:50.242601  434516 status.go:176] ha-906867-m04 status: &{Name:ha-906867-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.65s)
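
Note: `minikube status --help` documents the exit status as bit-encoded (1 = minikube NOK, 2 = cluster NOK, 4 = Kubernetes NOK), which is why the fully stopped cluster above exits with status 7:

  out/minikube-linux-amd64 -p ha-906867 status; echo $?   # 7 = 1 + 2 + 4, all components stopped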

TestMultiControlPlane/serial/RestartCluster (51.8s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1019 12:30:19.365152  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.979421086s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.80s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (35.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-906867 node add --control-plane --alsologtostderr -v 5: (34.263857579s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-906867 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (38.85s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-071159 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1019 12:31:41.286566  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-071159 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (38.846325027s)
--- PASS: TestJSONOutput/start/Command (38.85s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.13s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-071159 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-071159 --output=json --user=testUser: (6.126685102s)
--- PASS: TestJSONOutput/stop/Command (6.13s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-064876 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-064876 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.358977ms)
-- stdout --
	{"specversion":"1.0","id":"a4b7763e-783b-40a1-8331-577e82967221","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-064876] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f886abe-9db5-4851-9d70-6fa39dd8dd35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"f2c30afb-f090-475c-bf0c-01a96fc187e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1bc5992-5d7f-48d1-b0de-0b676f30494b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig"}}
	{"specversion":"1.0","id":"77827aea-8a8d-49df-9652-9f875ba2f0dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube"}}
	{"specversion":"1.0","id":"a67823b7-b2f9-47f7-ad79-f2d02deda4dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"605a056a-6112-4c21-beef-4bf9b2dae6c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e616140a-1adf-4d52-9641-b227434be092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-064876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-064876
--- PASS: TestErrorJSONOutput (0.21s)
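Each line the `--output=json` runs above emit is a CloudEvents-style JSON object, as seen in the stdout block. A minimal sketch of consuming that stream and surfacing error events such as DRV_UNSUPPORTED_OS (`cloudEvent` is a local stand-in type, not one minikube exports):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent matches the fields visible in the events above; minikube
	// emits one such JSON object per line when run with --output=json.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate non-JSON lines mixed into the stream
			}
			// Surface error events like the DRV_UNSUPPORTED_OS one above.
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}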

TestKicCustomNetwork/create_custom_network (27.62s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-383775 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-383775 --network=: (25.477765818s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-383775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-383775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-383775: (2.126241763s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.62s)

TestKicCustomNetwork/use_default_bridge_network (24.12s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-620032 --network=bridge
E1019 12:33:02.893598  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-620032 --network=bridge: (22.115726444s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-620032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-620032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-620032: (1.984709777s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.12s)

TestKicExistingNetwork (26.86s)
=== RUN   TestKicExistingNetwork
I1019 12:33:12.271694  355262 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1019 12:33:12.288708  355262 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1019 12:33:12.288780  355262 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1019 12:33:12.288818  355262 cli_runner.go:164] Run: docker network inspect existing-network
W1019 12:33:12.305156  355262 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1019 12:33:12.305188  355262 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1019 12:33:12.305200  355262 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1019 12:33:12.305318  355262 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1019 12:33:12.321550  355262 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a4629926c406 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:8c:3f:62:13:f6} reservation:<nil>}
I1019 12:33:12.321978  355262 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018fb3e0}
I1019 12:33:12.322008  355262 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1019 12:33:12.322067  355262 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1019 12:33:12.374939  355262 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-963844 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-963844 --network=existing-network: (24.75170305s)
helpers_test.go:175: Cleaning up "existing-network-963844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-963844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-963844: (1.965261754s)
I1019 12:33:39.109120  355262 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.86s)
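The network_create lines above show the subnet probe: 192.168.49.0/24 is skipped as taken and the next candidate, 192.168.58.0/24, is used. A rough sketch of that walk, assuming a step of 9 in the third octet (matching the 49 -> 58 jump seen here, though the exact increment minikube uses is an assumption):

	package main

	import "fmt"

	func main() {
		// Subnets already present, e.g. gathered from `docker network inspect`.
		taken := map[string]bool{"192.168.49.0/24": true}

		// Walk candidate /24s and stop at the first free one.
		for third := 49; third < 256; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr) // 192.168.58.0/24
			break
		}
	}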

TestKicCustomSubnet (23.98s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-669909 --subnet=192.168.60.0/24
E1019 12:33:57.426470  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-669909 --subnet=192.168.60.0/24: (21.851570212s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-669909 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-669909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-669909
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-669909: (2.111345409s)
--- PASS: TestKicCustomSubnet (23.98s)

TestKicStaticIP (24.51s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-129654 --static-ip=192.168.200.200
E1019 12:34:25.128736  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-129654 --static-ip=192.168.200.200: (22.263166318s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-129654 ip
helpers_test.go:175: Cleaning up "static-ip-129654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-129654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-129654: (2.115183281s)
--- PASS: TestKicStaticIP (24.51s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.26s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-966357 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-966357 --driver=docker  --container-runtime=crio: (19.302310552s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-969203 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-969203 --driver=docker  --container-runtime=crio: (22.053452942s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-966357
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-969203
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-969203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-969203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-969203: (2.377025629s)
helpers_test.go:175: Cleaning up "first-966357" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-966357
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-966357: (2.351397903s)
--- PASS: TestMinikubeProfile (47.26s)

TestMountStart/serial/StartWithMountFirst (5.28s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-860933 --memory=3072 --mount-string /tmp/TestMountStartserial3210431501/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-860933 --memory=3072 --mount-string /tmp/TestMountStartserial3210431501/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.276816456s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.28s)

TestMountStart/serial/VerifyMountFirst (0.26s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-860933 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.34s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-874782 --memory=3072 --mount-string /tmp/TestMountStartserial3210431501/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-874782 --memory=3072 --mount-string /tmp/TestMountStartserial3210431501/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.343448058s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.34s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-860933 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-860933 --alsologtostderr -v=5: (1.694005318s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-874782
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-874782: (1.242148051s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.52s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-874782
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-874782: (6.518327392s)
--- PASS: TestMountStart/serial/RestartStopped (7.52s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-874782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (60.7s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-871613 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-871613 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m0.238762308s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.70s)

TestMultiNode/serial/DeployApp2Nodes (3.37s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-871613 -- rollout status deployment/busybox: (2.050472788s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-mjcpv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-td5gl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-mjcpv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-td5gl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-mjcpv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-td5gl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.37s)

TestMultiNode/serial/PingHostFrom2Pods (0.65s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-mjcpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-mjcpv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-td5gl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-871613 -- exec busybox-7b57f96db7-td5gl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)
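The pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, keeps line 5 of the nslookup output and takes its third space-separated field, the resolved host IP. The same parse in Go against illustrative busybox-style output (the sample text is not captured from this run):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative busybox-nslookup output; line 5 carries the answer.
		sample := `Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

	Name:      host.minikube.internal
	Address 1: 192.168.67.1 host.minikube.internal`

		lines := strings.Split(sample, "\n")
		fields := strings.Split(strings.TrimSpace(lines[4]), " ") // awk NR==5 -> index 4
		fmt.Println(fields[2])                                    // cut -d' ' -f3 -> 192.168.67.1
	}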

TestMultiNode/serial/AddNode (53.79s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-871613 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-871613 -v=5 --alsologtostderr: (53.160905618s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.79s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-871613 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.38s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp testdata/cp-test.txt multinode-871613:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2271593606/001/cp-test_multinode-871613.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613:/home/docker/cp-test.txt multinode-871613-m02:/home/docker/cp-test_multinode-871613_multinode-871613-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test_multinode-871613_multinode-871613-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613:/home/docker/cp-test.txt multinode-871613-m03:/home/docker/cp-test_multinode-871613_multinode-871613-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test_multinode-871613_multinode-871613-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp testdata/cp-test.txt multinode-871613-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2271593606/001/cp-test_multinode-871613-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m02:/home/docker/cp-test.txt multinode-871613:/home/docker/cp-test_multinode-871613-m02_multinode-871613.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test_multinode-871613-m02_multinode-871613.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m02:/home/docker/cp-test.txt multinode-871613-m03:/home/docker/cp-test_multinode-871613-m02_multinode-871613-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test_multinode-871613-m02_multinode-871613-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp testdata/cp-test.txt multinode-871613-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2271593606/001/cp-test_multinode-871613-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m03:/home/docker/cp-test.txt multinode-871613:/home/docker/cp-test_multinode-871613-m03_multinode-871613.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613 "sudo cat /home/docker/cp-test_multinode-871613-m03_multinode-871613.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 cp multinode-871613-m03:/home/docker/cp-test.txt multinode-871613-m02:/home/docker/cp-test_multinode-871613-m03_multinode-871613-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 ssh -n multinode-871613-m02 "sudo cat /home/docker/cp-test_multinode-871613-m03_multinode-871613-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.38s)
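The CopyFile steps above repeat one pattern: `minikube cp` a file onto a node, then `minikube ssh -n <node> "sudo cat ..."` it back and compare. A hedged sketch of that round trip driven from Go (binary path and profile name are placeholders taken from this log):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-871613"

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		// Copy the file onto the node...
		out, err := exec.Command(bin, "-p", profile, "cp",
			"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}
		// ...then read it back over ssh and compare.
		got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			panic("copied file does not match source")
		}
		fmt.Println("cp round trip OK")
	}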

TestMultiNode/serial/StopNode (2.22s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-871613 node stop m03: (1.258842837s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-871613 status: exit status 7 (480.859757ms)
-- stdout --
	multinode-871613
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-871613-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-871613-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr: exit status 7 (478.133348ms)
-- stdout --
	multinode-871613
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-871613-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-871613-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1019 12:37:49.283393  494126 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:37:49.283660  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:37:49.283670  494126 out.go:374] Setting ErrFile to fd 2...
	I1019 12:37:49.283674  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:37:49.283895  494126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:37:49.284062  494126 out.go:368] Setting JSON to false
	I1019 12:37:49.284086  494126 mustload.go:65] Loading cluster: multinode-871613
	I1019 12:37:49.284203  494126 notify.go:220] Checking for updates...
	I1019 12:37:49.284444  494126 config.go:182] Loaded profile config "multinode-871613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:37:49.284461  494126 status.go:174] checking status of multinode-871613 ...
	I1019 12:37:49.284893  494126 cli_runner.go:164] Run: docker container inspect multinode-871613 --format={{.State.Status}}
	I1019 12:37:49.302206  494126 status.go:371] multinode-871613 host status = "Running" (err=<nil>)
	I1019 12:37:49.302230  494126 host.go:66] Checking if "multinode-871613" exists ...
	I1019 12:37:49.302543  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-871613
	I1019 12:37:49.320903  494126 host.go:66] Checking if "multinode-871613" exists ...
	I1019 12:37:49.321245  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:37:49.321312  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-871613
	I1019 12:37:49.338772  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/multinode-871613/id_rsa Username:docker}
	I1019 12:37:49.433068  494126 ssh_runner.go:195] Run: systemctl --version
	I1019 12:37:49.439219  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:37:49.451438  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:37:49.506977  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-19 12:37:49.496479792 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:37:49.507524  494126 kubeconfig.go:125] found "multinode-871613" server: "https://192.168.67.2:8443"
	I1019 12:37:49.507560  494126 api_server.go:166] Checking apiserver status ...
	I1019 12:37:49.507597  494126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 12:37:49.519095  494126 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup
	W1019 12:37:49.527344  494126 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1237/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 12:37:49.527386  494126 ssh_runner.go:195] Run: ls
	I1019 12:37:49.530958  494126 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1019 12:37:49.535944  494126 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1019 12:37:49.535965  494126 status.go:463] multinode-871613 apiserver status = Running (err=<nil>)
	I1019 12:37:49.535975  494126 status.go:176] multinode-871613 status: &{Name:multinode-871613 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:37:49.536000  494126 status.go:174] checking status of multinode-871613-m02 ...
	I1019 12:37:49.536261  494126 cli_runner.go:164] Run: docker container inspect multinode-871613-m02 --format={{.State.Status}}
	I1019 12:37:49.553235  494126 status.go:371] multinode-871613-m02 host status = "Running" (err=<nil>)
	I1019 12:37:49.553258  494126 host.go:66] Checking if "multinode-871613-m02" exists ...
	I1019 12:37:49.553541  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-871613-m02
	I1019 12:37:49.570770  494126 host.go:66] Checking if "multinode-871613-m02" exists ...
	I1019 12:37:49.571021  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 12:37:49.571057  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-871613-m02
	I1019 12:37:49.588026  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21772-351705/.minikube/machines/multinode-871613-m02/id_rsa Username:docker}
	I1019 12:37:49.680582  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 12:37:49.693064  494126 status.go:176] multinode-871613-m02 status: &{Name:multinode-871613-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:37:49.693106  494126 status.go:174] checking status of multinode-871613-m03 ...
	I1019 12:37:49.693366  494126 cli_runner.go:164] Run: docker container inspect multinode-871613-m03 --format={{.State.Status}}
	I1019 12:37:49.711525  494126 status.go:371] multinode-871613-m03 host status = "Stopped" (err=<nil>)
	I1019 12:37:49.711551  494126 status.go:384] host is not running, skipping remaining checks
	I1019 12:37:49.711559  494126 status.go:176] multinode-871613-m03 status: &{Name:multinode-871613-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
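The `&{Name:... Host:... Kubelet:...}` lines in the stderr above are %+v prints of minikube's per-node status value. A stand-in type with fields read off that output (inferred from the print, so it may not match the internal definition exactly):

	package main

	import "fmt"

	// Status mirrors the fields visible in the "&{Name:... Host:...}" log
	// lines above; inferred from the output, not copied from minikube's source.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := Status{Name: "multinode-871613-m03", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped",
			Worker: true}
		fmt.Printf("%+v\n", &s) // same shape as the log lines above
	}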

TestMultiNode/serial/StartAfterStop (7.16s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-871613 node start m03 -v=5 --alsologtostderr: (6.474559953s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.16s)

TestMultiNode/serial/RestartKeepsNodes (82.11s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-871613
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-871613
E1019 12:38:02.890498  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-871613: (29.463198572s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-871613 --wait=true -v=5 --alsologtostderr
E1019 12:38:57.423495  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-871613 --wait=true -v=5 --alsologtostderr: (52.543109708s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-871613
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.11s)

TestMultiNode/serial/DeleteNode (5.21s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-871613 node delete m03: (4.634658805s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

TestMultiNode/serial/StopMultiNode (28.44s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-871613 stop: (28.259562357s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-871613 status: exit status 7 (87.786473ms)
-- stdout --
	multinode-871613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-871613-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr: exit status 7 (88.394437ms)
-- stdout --
	multinode-871613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-871613-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1019 12:39:52.586976  503853 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:39:52.587254  503853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:39:52.587265  503853 out.go:374] Setting ErrFile to fd 2...
	I1019 12:39:52.587269  503853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:39:52.587484  503853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:39:52.587667  503853 out.go:368] Setting JSON to false
	I1019 12:39:52.587693  503853 mustload.go:65] Loading cluster: multinode-871613
	I1019 12:39:52.587826  503853 notify.go:220] Checking for updates...
	I1019 12:39:52.588061  503853 config.go:182] Loaded profile config "multinode-871613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:39:52.588075  503853 status.go:174] checking status of multinode-871613 ...
	I1019 12:39:52.588542  503853 cli_runner.go:164] Run: docker container inspect multinode-871613 --format={{.State.Status}}
	I1019 12:39:52.608864  503853 status.go:371] multinode-871613 host status = "Stopped" (err=<nil>)
	I1019 12:39:52.608913  503853 status.go:384] host is not running, skipping remaining checks
	I1019 12:39:52.608920  503853 status.go:176] multinode-871613 status: &{Name:multinode-871613 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 12:39:52.608947  503853 status.go:174] checking status of multinode-871613-m02 ...
	I1019 12:39:52.609208  503853 cli_runner.go:164] Run: docker container inspect multinode-871613-m02 --format={{.State.Status}}
	I1019 12:39:52.627216  503853 status.go:371] multinode-871613-m02 host status = "Stopped" (err=<nil>)
	I1019 12:39:52.627236  503853 status.go:384] host is not running, skipping remaining checks
	I1019 12:39:52.627241  503853 status.go:176] multinode-871613-m02 status: &{Name:multinode-871613-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.44s)
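
The status checks above lean on minikube's exit-code convention: "minikube status" exits 7 when the host is stopped, and the test accepts that as the expected outcome of "minikube stop" rather than as a failure. A minimal Go sketch of that handling (the binary path and profile name are copied from the log; the wrapper program itself is hypothetical, not the test's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run `minikube status` for the stopped profile from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-871613", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Print("running:\n", string(out))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit code 7 reports a cleanly stopped host, so treat it as success.
		fmt.Print("stopped as expected:\n", string(out))
	default:
		fmt.Println("status failed:", err)
	}
}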

TestMultiNode/serial/RestartMultiNode (27.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-871613 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-871613 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (26.773818683s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-871613 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (27.35s)
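
The final readiness check above hands kubectl a go-template, which kubectl evaluates with Go's text/template package. The sketch below runs the same template string locally against a hypothetical two-node JSON fixture to show what it walks: every node's conditions, printing the status of each "Ready" condition.

package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

// A hypothetical stand-in for `kubectl get nodes -o json` on a two-node cluster.
const nodesJSON = `{"items":[
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

// The same template string the test passes to kubectl.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &doc); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, doc); err != nil { // prints " True" once per node
		log.Fatal(err)
	}
}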

TestMultiNode/serial/ValidateNameConflict (26.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-871613
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-871613-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-871613-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.535115ms)
-- stdout --
	* [multinode-871613-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-871613-m02' is duplicated with machine name 'multinode-871613-m02' in profile 'multinode-871613'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-871613-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-871613-m03 --driver=docker  --container-runtime=crio: (23.952544738s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-871613
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-871613: exit status 80 (276.244907ms)
-- stdout --
	* Adding node m03 to cluster multinode-871613 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-871613-m03 already exists in multinode-871613-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-871613-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-871613-m03: (2.370948842s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.72s)
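
The MK_USAGE rejection above enforces profile-name uniqueness: a new profile may not reuse a machine name that an existing multinode profile already owns (multinode-871613-m02 is the second node of multinode-871613). A sketch of that rule with illustrative data structures, not minikube's actual types:

package main

import "fmt"

// conflicts reports which existing profile already owns a machine named
// like the proposed profile, if any.
func conflicts(proposed string, machinesByProfile map[string][]string) (string, bool) {
	for profile, machines := range machinesByProfile {
		for _, machine := range machines {
			if machine == proposed {
				return profile, true
			}
		}
	}
	return "", false
}

func main() {
	existing := map[string][]string{
		"multinode-871613": {"multinode-871613", "multinode-871613-m02"},
	}
	if owner, dup := conflicts("multinode-871613-m02", existing); dup {
		fmt.Printf("profile name %q duplicates a machine name in profile %q\n",
			"multinode-871613-m02", owner)
	}
}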

TestPreload (105.56s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-130190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-130190 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (49.549431864s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-130190 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-130190
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-130190: (5.803410326s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-130190 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-130190 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.670559229s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-130190 image list
helpers_test.go:175: Cleaning up "test-preload-130190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-130190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-130190: (2.393500838s)
--- PASS: TestPreload (105.56s)
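
TestPreload's flow above: start with --preload=false on an older Kubernetes, pull an extra image, stop, restart with default (preloaded) settings, then confirm the pulled image survived. A sketch of that closing containment check; the image list below is an illustrative stand-in for real "image list" output:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical output of `minikube image list` after the restart.
	imageList := []string{
		"gcr.io/k8s-minikube/busybox:latest",
		"registry.k8s.io/pause:3.10",
	}
	const want = "gcr.io/k8s-minikube/busybox"
	for _, img := range imageList {
		if strings.HasPrefix(img, want) {
			fmt.Println("pulled image survived the restart:", img)
			return
		}
	}
	fmt.Printf("image %s not preserved across restart\n", want)
}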

TestScheduledStopUnix (96.29s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-282582 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-282582 --memory=3072 --driver=docker  --container-runtime=crio: (20.057999528s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282582 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-282582 -n scheduled-stop-282582
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 12:42:56.920549  355262 retry.go:31] will retry after 88.256µs: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.921706  355262 retry.go:31] will retry after 171.55µs: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.922876  355262 retry.go:31] will retry after 281.27µs: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.924051  355262 retry.go:31] will retry after 404.953µs: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.925161  355262 retry.go:31] will retry after 398.465µs: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.926309  355262 retry.go:31] will retry after 1.09184ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.928487  355262 retry.go:31] will retry after 1.386238ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.930713  355262 retry.go:31] will retry after 1.650357ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.932913  355262 retry.go:31] will retry after 1.463735ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.935153  355262 retry.go:31] will retry after 5.586779ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.941586  355262 retry.go:31] will retry after 8.59829ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.950826  355262 retry.go:31] will retry after 11.678807ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.963039  355262 retry.go:31] will retry after 8.450568ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.972265  355262 retry.go:31] will retry after 18.114321ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
I1019 12:42:56.990833  355262 retry.go:31] will retry after 33.330808ms: open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/scheduled-stop-282582/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282582 --cancel-scheduled
E1019 12:43:02.888627  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282582 -n scheduled-stop-282582
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-282582
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-282582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1019 12:43:57.427145  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-282582
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-282582: exit status 7 (71.877304ms)
-- stdout --
	scheduled-stop-282582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282582 -n scheduled-stop-282582
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-282582 -n scheduled-stop-282582: exit status 7 (67.834101ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-282582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-282582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-282582: (4.853709266s)
--- PASS: TestScheduledStopUnix (96.29s)
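
The retry.go lines above are a poll loop whose wait grows from microseconds upward while the profile's scheduled-stop pid file does not yet exist. A sketch of that shape, assuming a hypothetical pid path; this is not minikube's retry implementation, just the same pattern:

package main

import (
	"fmt"
	"os"
	"time"
)

// retryAfter polls op, sleeping a little longer each round, until it
// succeeds or the deadline passes.
func retryAfter(deadline time.Duration, op func() error) error {
	delay := 100 * time.Microsecond
	start := time.Now()
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %v: %w", deadline, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // widen the gap, matching the growing waits in the log
	}
}

func main() {
	// Hypothetical pid file; the test above polls the profile's
	// scheduled-stop pid file in the same way.
	pidPath := "/tmp/example/pid"
	_ = retryAfter(5*time.Millisecond, func() error {
		_, err := os.Stat(pidPath)
		return err
	})
}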

TestInsufficientStorage (9.68s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-332992 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-332992 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.216880275s)
-- stdout --
	{"specversion":"1.0","id":"b62480d3-ff3a-4280-8bbc-a2b81d367a7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-332992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"533f8204-fa0b-4e45-b3c0-826467a1eeac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"45e91842-2f53-4438-92ac-923fc5e62b6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c1c986d7-6f2f-46a9-ab4c-8a6ff48b3a3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig"}}
	{"specversion":"1.0","id":"2192ff80-9dc1-40f8-b1bf-633b89b51d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube"}}
	{"specversion":"1.0","id":"f308805f-8b88-403c-a241-af5e9dd06a13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"73c894d5-c629-4b3d-8ea2-cca5deda6104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a76de330-544b-4789-b99e-ecb42f7f039c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e7d9b8bd-85ed-4c15-bfe7-c8556f56a688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"82111826-3600-4917-9d79-ba20783e4b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7e584fd0-1965-4550-bcfe-e21d922b1fc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ff76500f-2209-4820-8876-0dd78f1692b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-332992\" primary control-plane node in \"insufficient-storage-332992\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d70f3a8-8a1c-44ba-8adb-6b811e738117","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c606e8b0-c783-407d-a346-453fca2faf3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd3b0787-55b9-4510-88e4-df4de3343544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-332992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-332992 --output=json --layout=cluster: exit status 7 (278.489924ms)
-- stdout --
	{"Name":"insufficient-storage-332992","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332992","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1019 12:44:20.224304  524029 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-332992" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-332992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-332992 --output=json --layout=cluster: exit status 7 (280.492518ms)
-- stdout --
	{"Name":"insufficient-storage-332992","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332992","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1019 12:44:20.504769  524138 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-332992" does not appear in /home/jenkins/minikube-integration/21772-351705/kubeconfig
	E1019 12:44:20.515385  524138 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/insufficient-storage-332992/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-332992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-332992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-332992: (1.903500626s)
--- PASS: TestInsufficientStorage (9.68s)
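
With --output=json, every stdout line above is a CloudEvents 1.0 envelope, and the test looks for an io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A sketch of decoding one such line; the struct is inferred from the fields visible in the log, not taken from minikube's source, and the message string is abbreviated for illustration:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// event mirrors the envelope fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"bd3b0787-55b9-4510-88e4-df4de3343544","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		log.Fatal(err)
	}
	if e.Type == "io.k8s.sigs.minikube.error" && e.Data["exitcode"] == "26" {
		fmt.Println("storage exhaustion reported:", e.Data["name"])
	}
}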

TestRunningBinaryUpgrade (48.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1165223281 start -p running-upgrade-188277 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1165223281 start -p running-upgrade-188277 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.058878819s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-188277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-188277 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.374666144s)
helpers_test.go:175: Cleaning up "running-upgrade-188277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-188277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-188277: (2.366769615s)
--- PASS: TestRunningBinaryUpgrade (48.35s)

TestKubernetesUpgrade (317.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.838029424s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-566686
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-566686: (1.317391898s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-566686 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-566686 status --format={{.Host}}: exit status 7 (104.87167ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.818998389s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-566686 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (102.971036ms)
-- stdout --
	* [kubernetes-upgrade-566686] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-566686
	    minikube start -p kubernetes-upgrade-566686 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5666862 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-566686 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-566686 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.630471743s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-566686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-566686
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-566686: (2.446689705s)
--- PASS: TestKubernetesUpgrade (317.34s)
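
The K8S_DOWNGRADE_UNSUPPORTED exit above comes from comparing the requested Kubernetes version against the version the cluster already runs. A sketch of such a guard for plain vMAJOR.MINOR.PATCH strings; minikube's real comparison handles more formats:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// newerThan reports whether version a is strictly newer than b, for
// simple vMAJOR.MINOR.PATCH strings only.
func newerThan(a, b string) bool {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < len(pa) && i < len(pb); i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			return na > nb
		}
	}
	return false
}

func main() {
	existing, requested := "v1.34.1", "v1.28.0" // versions from the log above
	if newerThan(existing, requested) {
		fmt.Printf("refusing to downgrade %s cluster to %s\n", existing, requested)
	}
}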

TestMissingContainerUpgrade (90.13s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.406993427 start -p missing-upgrade-408940 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.406993427 start -p missing-upgrade-408940 --memory=3072 --driver=docker  --container-runtime=crio: (43.605074827s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-408940
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-408940: (1.752032817s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-408940
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-408940 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-408940 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.173425765s)
helpers_test.go:175: Cleaning up "missing-upgrade-408940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-408940
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-408940: (2.045053787s)
--- PASS: TestMissingContainerUpgrade (90.13s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestStoppedBinaryUpgrade/Upgrade (73.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3078012303 start -p stopped-upgrade-645762 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3078012303 start -p stopped-upgrade-645762 --memory=3072 --vm-driver=docker  --container-runtime=crio: (43.640093714s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3078012303 -p stopped-upgrade-645762 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3078012303 -p stopped-upgrade-645762 stop: (12.337313498s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-645762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1019 12:45:20.490103  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-645762 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.552335101s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-645762
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-645762: (1.074659574s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

TestNetworkPlugins/group/false (3.58s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-931932 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-931932 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (181.292331ms)
-- stdout --
	* [false-931932] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1019 12:46:11.735910  554460 out.go:360] Setting OutFile to fd 1 ...
	I1019 12:46:11.736198  554460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:46:11.736214  554460 out.go:374] Setting ErrFile to fd 2...
	I1019 12:46:11.736221  554460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 12:46:11.736545  554460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-351705/.minikube/bin
	I1019 12:46:11.737040  554460 out.go:368] Setting JSON to false
	I1019 12:46:11.738448  554460 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8920,"bootTime":1760869052,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 12:46:11.738548  554460 start.go:141] virtualization: kvm guest
	I1019 12:46:11.740514  554460 out.go:179] * [false-931932] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 12:46:11.741785  554460 out.go:179]   - MINIKUBE_LOCATION=21772
	I1019 12:46:11.741792  554460 notify.go:220] Checking for updates...
	I1019 12:46:11.743103  554460 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 12:46:11.744728  554460 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	I1019 12:46:11.746047  554460 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	I1019 12:46:11.747334  554460 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 12:46:11.748611  554460 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 12:46:11.751262  554460 config.go:182] Loaded profile config "cert-expiration-599351": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:46:11.751406  554460 config.go:182] Loaded profile config "cert-options-868990": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:46:11.751549  554460 config.go:182] Loaded profile config "kubernetes-upgrade-566686": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 12:46:11.751675  554460 driver.go:421] Setting default libvirt URI to qemu:///system
	I1019 12:46:11.782287  554460 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1019 12:46:11.782408  554460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1019 12:46:11.853001  554460 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-19 12:46:11.841011944 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1019 12:46:11.853157  554460 docker.go:318] overlay module found
	I1019 12:46:11.854776  554460 out.go:179] * Using the docker driver based on user configuration
	I1019 12:46:11.856023  554460 start.go:305] selected driver: docker
	I1019 12:46:11.856039  554460 start.go:925] validating driver "docker" against <nil>
	I1019 12:46:11.856050  554460 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 12:46:11.857941  554460 out.go:203] 
	W1019 12:46:11.859131  554460 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 12:46:11.860301  554460 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-931932 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-931932

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-931932" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt
extensions:
- extension:
last-update: Sun, 19 Oct 2025 12:45:14 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-566686
contexts:
- context:
cluster: kubernetes-upgrade-566686
user: kubernetes-upgrade-566686
name: kubernetes-upgrade-566686
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-566686
user:
client-certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.crt
client-key: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-931932

>>> host: docker daemon status:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: docker daemon config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /etc/docker/daemon.json:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: docker system info:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: cri-docker daemon status:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: cri-docker daemon config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: cri-dockerd version:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: containerd daemon status:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: containerd daemon config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /etc/containerd/config.toml:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: containerd config dump:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: crio daemon status:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: crio daemon config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: /etc/crio:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

>>> host: crio config:
* Profile "false-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-931932"

----------------------- debugLogs end: false-931932 [took: 3.238822926s] --------------------------------
helpers_test.go:175: Cleaning up "false-931932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-931932
--- PASS: TestNetworkPlugins/group/false (3.58s)

TestPause/serial/Start (39.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-513789 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-513789 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (39.863942879s)
--- PASS: TestPause/serial/Start (39.86s)

TestPause/serial/SecondStartNoReconfiguration (7.91s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-513789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-513789 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.903824185s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.91s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (68.54517ms)

-- stdout --
	* [NoKubernetes-352361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-351705/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-351705/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (25.96s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352361 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.616864568s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-352361 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.96s)

TestNetworkPlugins/group/auto/Start (41.63s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.627139632s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.63s)

TestNoKubernetes/serial/StartWithStopK8s (16.84s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.516300073s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-352361 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-352361 status -o json: exit status 2 (308.485275ms)

-- stdout --
	{"Name":"NoKubernetes-352361","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-352361
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-352361: (2.011548786s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.84s)

TestNoKubernetes/serial/Start (4.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352361 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.866579931s)
--- PASS: TestNoKubernetes/serial/Start (4.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-352361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-352361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.54743ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (1.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.74s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-352361
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-352361: (1.254614877s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (6.4s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-352361 --driver=docker  --container-runtime=crio
E1019 12:48:02.888665  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/addons-042725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-352361 --driver=docker  --container-runtime=crio: (6.396291779s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.40s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-352361 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-352361 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.198829ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-931932 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jwtf6" [98a24dc5-4d94-44ce-b032-8d73ec2f32bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jwtf6" [98a24dc5-4d94-44ce-b032-8d73ec2f32bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.002668081s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)

TestNetworkPlugins/group/kindnet/Start (41.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.270611123s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.27s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/calico/Start (50.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.033650719s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.03s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bc4tx" [91eea458-7345-4449-9060-42ec09c96f1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004781338s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-931932 "pgrep -a kubelet"
I1019 12:48:54.417850  355262 config.go:182] Loaded profile config "kindnet-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gvxb9" [c130e0b9-42ac-4343-839b-6de98d074eff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 12:48:57.423010  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gvxb9" [c130e0b9-42ac-4343-839b-6de98d074eff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003521651s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (49.68s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.677011689s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.68s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xw7lq" [51ac0234-0ee5-4e05-b137-08d8c0aee777] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003827268s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (42.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.36756366s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.37s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-931932 "pgrep -a kubelet"
I1019 12:49:30.338816  355262 config.go:182] Loaded profile config "calico-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fjgbp" [b3d1c062-9dd2-477d-87de-7dcffc534edd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fjgbp" [b3d1c062-9dd2-477d-87de-7dcffc534edd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004193121s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/flannel/Start (49.27s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.274693503s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.27s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (67.3s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-931932 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.301736716s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-931932 "pgrep -a kubelet"
I1019 12:50:07.155872  355262 config.go:182] Loaded profile config "enable-default-cni-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vztf8" [3c2e296d-107b-4e4a-8520-daa9b3b30e9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vztf8" [3c2e296d-107b-4e4a-8520-daa9b3b30e9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003532487s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-931932 "pgrep -a kubelet"
I1019 12:50:13.418870  355262 config.go:182] Loaded profile config "custom-flannel-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f2fl2" [de4aac2b-b5d0-4086-bb50-9b5064d0827c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f2fl2" [de4aac2b-b5d0-4086-bb50-9b5064d0827c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003650293s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-rdhvw" [e62b0ca7-97ee-4c1a-90c8-d8f2d80b2e0f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003328363s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-931932 "pgrep -a kubelet"
I1019 12:50:35.366978  355262 config.go:182] Loaded profile config "flannel-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hl6ft" [18e2cd25-46ea-4e3e-abc6-6ffcddc5af65] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hl6ft" [18e2cd25-46ea-4e3e-abc6-6ffcddc5af65] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004543965s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.22s)

TestStartStop/group/old-k8s-version/serial/FirstStart (51.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.9242428s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.92s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (52.58s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.577021259s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.58s)

TestStartStop/group/embed-certs/serial/FirstStart (71.65s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.644929749s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.65s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-931932 "pgrep -a kubelet"
I1019 12:51:09.489629  355262 config.go:182] Loaded profile config "bridge-931932": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-931932 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ct22q" [6259523f-52f0-4b36-9f83-abec60015758] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ct22q" [6259523f-52f0-4b36-9f83-abec60015758] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003905888s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.20s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-931932 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-931932 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E1019 12:53:15.711159  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:25.952656  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/auto-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-577062 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e374ff62-1a16-4b52-84da-3d26c90172cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e374ff62-1a16-4b52-84da-3d26c90172cf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004432224s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-577062 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.29s)

TestStartStop/group/no-preload/serial/DeployApp (8.21s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-561408 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ef865d00-0bef-4438-9c22-1892d84e64cb] Pending
helpers_test.go:352: "busybox" [ef865d00-0bef-4438-9c22-1892d84e64cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ef865d00-0bef-4438-9c22-1892d84e64cb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003365361s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-561408 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.21s)

TestStartStop/group/old-k8s-version/serial/Stop (16.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-577062 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-577062 --alsologtostderr -v=3: (16.911743699s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.520527399s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.52s)

TestStartStop/group/no-preload/serial/Stop (18.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-561408 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-561408 --alsologtostderr -v=3: (18.073478196s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062: exit status 7 (87.466206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-577062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (53.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-577062 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.780830085s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-577062 -n old-k8s-version-577062
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408: exit status 7 (93.963862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-561408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/no-preload/serial/SecondStart (43.66s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-561408 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.316046189s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-561408 -n no-preload-561408
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.66s)

TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-123864 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [113fedc6-dd5a-4b53-873c-ed685ea5ed9c] Pending
helpers_test.go:352: "busybox" [113fedc6-dd5a-4b53-873c-ed685ea5ed9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [113fedc6-dd5a-4b53-873c-ed685ea5ed9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004687894s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-123864 exec busybox -- /bin/sh -c "ulimit -n"
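
The DeployApp sequence above is: create the busybox manifest, then poll up to 8m0s for a pod labeled integration-test=busybox to become healthy. A rough stand-in for that poll using kubectl wait, sketched in Go (context name taken from this log; kubectl on PATH is assumed):

	// deploy_wait.go: sketch of the create-then-wait pattern from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		ctx := "embed-certs-123864" // context name from the log above
		if err := kubectl("--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
			panic(err)
		}
		// Rough equivalent of the test's 8m0s health poll.
		if err := kubectl("--context", ctx, "wait", "--for=condition=Ready",
			"pod", "-l", "integration-test=busybox", "--timeout=8m0s"); err != nil {
			panic(err)
		}
	}
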
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d1a7398f-f723-4f73-93f3-8aafc8fb32c1] Pending
helpers_test.go:352: "busybox" [d1a7398f-f723-4f73-93f3-8aafc8fb32c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d1a7398f-f723-4f73-93f3-8aafc8fb32c1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00429259s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

TestStartStop/group/embed-certs/serial/Stop (16.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-123864 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-123864 --alsologtostderr -v=3: (16.108208231s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-999693 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-999693 --alsologtostderr -v=3: (16.203792695s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864: exit status 7 (69.879309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-123864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (44.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-123864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.738727601s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-123864 -n embed-certs-123864
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693: exit status 7 (76.18276ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-999693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-999693 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.267724401s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-999693 -n default-k8s-diff-port-999693
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.63s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hm7lm" [07c4ccb8-982b-4055-8676-f081e5190ce4] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004091765s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4xrjh" [319a68f4-f2f5-4163-af82-7420a9bd1a41] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011719775s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4xrjh" [319a68f4-f2f5-4163-af82-7420a9bd1a41] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006272071s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-577062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hm7lm" [07c4ccb8-982b-4055-8676-f081e5190ce4] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005372192s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-561408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-577062 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
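
VerifyKubernetesImages dumps the image list as JSON and reports anything outside the expected Kubernetes image set, which is what the three "Found non-minikube image" lines above are. A sketch of that filter, assuming (not confirmed here) that the JSON decodes to entries carrying a repoTags array; the real test presumably checks against the expected image list for the Kubernetes version under test:

	// image_filter.go: sketch of flagging images outside registry.k8s.io,
	// roughly matching the entries reported in the log above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// imageEntry is an assumed shape for one element of `image list --format=json`.
	type imageEntry struct {
		RepoTags []string `json:"repoTags"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p",
			"old-k8s-version-577062", "image", "list", "--format=json").Output()
		if err != nil {
			panic(err)
		}
		var entries []imageEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		for _, e := range entries {
			for _, tag := range e.RepoTags {
				if !strings.HasPrefix(tag, "registry.k8s.io/") {
					fmt.Println("Found non-minikube image:", tag)
				}
			}
		}
	}
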
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-561408 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/FirstStart (25.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (25.067781018s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.07s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b55t5" [2677e6ff-bf6f-4e47-acea-acc1cfbc5c26] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003228368s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b55t5" [2677e6ff-bf6f-4e47-acea-acc1cfbc5c26] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003754754s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-123864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bv5k2" [ffe96798-7c36-44e9-9226-0fea7d9cba29] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003529807s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (17.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-190708 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-190708 --alsologtostderr -v=3: (17.938300758s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.94s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-123864 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bv5k2" [ffe96798-7c36-44e9-9226-0fea7d9cba29] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00336801s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-999693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-999693 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708: exit status 7 (69.941794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-190708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (10.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1019 12:53:57.423323  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 12:53:58.348097  355262 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kindnet-931932/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
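
The two E1019 lines above are client-go's tls-transport-cache complaining that cached kubeconfig entries still reference client certificates from profiles (functional-688409, kindnet-931932) deleted earlier in the run; they are harmless noise for this test. The underlying failure is just a missing keypair, reproducible with a plain TLS load, sketched here (the path mirrors the log and is hypothetical outside this CI host):

	// cert_probe.go: sketch reproducing "Loading client cert failed".
	package main

	import (
		"crypto/tls"
		"fmt"
	)

	func main() {
		// Profile path copied from the E1019 line above; the files no longer exist.
		base := "/home/jenkins/minikube-integration/21772-351705/.minikube/profiles/functional-688409/"
		if _, err := tls.LoadX509KeyPair(base+"client.crt", base+"client.key"); err != nil {
			// Expect: open .../client.crt: no such file or directory
			fmt.Println("Loading client cert failed:", err)
		}
	}
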
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190708 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.082018895s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190708 -n newest-cni-190708
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.39s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-190708 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.46s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-931932 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-931932

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-931932

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/hosts:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/resolv.conf:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-931932

>>> host: crictl pods:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: crictl containers:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> k8s: describe netcat deployment:
error: context "kubenet-931932" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-931932" does not exist

>>> k8s: netcat logs:
error: context "kubenet-931932" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-931932" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-931932" does not exist

>>> k8s: coredns logs:
error: context "kubenet-931932" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-931932" does not exist

>>> k8s: api server logs:
error: context "kubenet-931932" does not exist

>>> host: /etc/cni:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: ip a s:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: ip r s:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: iptables-save:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: iptables table nat:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-931932" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-931932" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-931932" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: kubelet daemon config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> k8s: kubelet logs:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 12:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-566686
contexts:
- context:
    cluster: kubernetes-upgrade-566686
    user: kubernetes-upgrade-566686
  name: kubernetes-upgrade-566686
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-566686
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.crt
    client-key: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.key
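
Every "context was not found" line in this debug dump traces back to that kubeconfig: it defines only kubernetes-upgrade-566686, so lookups for kubenet-931932 fail before any cluster I/O happens. A sketch of the same lookup via client-go's clientcmd loader, assuming a client-go dependency and the default ~/.kube/config location:

	// context_probe.go: sketch of checking whether a kubeconfig defines a context.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Contexts["kubenet-931932"]; !ok {
			// Matches the repeated failure in the dump above.
			fmt.Println("context was not found for specified context: kubenet-931932")
		}
	}
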

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-931932

>>> host: docker daemon status:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: docker daemon config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: docker system info:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: cri-docker daemon status:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: cri-docker daemon config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: cri-dockerd version:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: containerd daemon status:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: containerd daemon config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: containerd config dump:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: crio daemon status:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: crio daemon config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: /etc/crio:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

>>> host: crio config:
* Profile "kubenet-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-931932"

----------------------- debugLogs end: kubenet-931932 [took: 3.307937548s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-931932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-931932
--- SKIP: TestNetworkPlugins/group/kubenet (3.46s)

TestNetworkPlugins/group/cilium (3.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-931932 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-931932

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-931932

>>> host: /etc/nsswitch.conf:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/hosts:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/resolv.conf:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-931932

>>> host: crictl pods:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: crictl containers:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> k8s: describe netcat deployment:
error: context "cilium-931932" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-931932" does not exist

>>> k8s: netcat logs:
error: context "cilium-931932" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-931932" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-931932" does not exist

>>> k8s: coredns logs:
error: context "cilium-931932" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-931932" does not exist

>>> k8s: api server logs:
error: context "cilium-931932" does not exist

>>> host: /etc/cni:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: ip a s:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: ip r s:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: iptables-save:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: iptables table nat:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-931932

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-931932

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-931932" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-931932" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-931932

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-931932

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-931932" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-931932" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-931932" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-931932" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-931932" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: kubelet daemon config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> k8s: kubelet logs:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 12:46:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: cert-expiration-599351
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-351705/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 12:45:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-566686
contexts:
- context:
    cluster: cert-expiration-599351
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 12:46:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-599351
  name: cert-expiration-599351
- context:
    cluster: kubernetes-upgrade-566686
    user: kubernetes-upgrade-566686
  name: kubernetes-upgrade-566686
current-context: cert-expiration-599351
kind: Config
users:
- name: cert-expiration-599351
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/cert-expiration-599351/client.crt
    client-key: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/cert-expiration-599351/client.key
- name: kubernetes-upgrade-566686
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.crt
    client-key: /home/jenkins/minikube-integration/21772-351705/.minikube/profiles/kubernetes-upgrade-566686/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-931932

>>> host: docker daemon status:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: docker daemon config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: docker system info:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: cri-docker daemon status:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: cri-docker daemon config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: cri-dockerd version:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: containerd daemon status:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: containerd daemon config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: containerd config dump:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: crio daemon status:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: crio daemon config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: /etc/crio:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

>>> host: crio config:
* Profile "cilium-931932" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-931932"

----------------------- debugLogs end: cilium-931932 [took: 3.56081036s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-931932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-931932
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)
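
The one probe above that returned real data is the kubectl config dump: the host kubeconfig only knows the leftover cert-expiration-599351 and kubernetes-upgrade-566686 profiles, and its current-context points at the former, which is why every kubectl probe failed with "context was not found" for cilium-931932. A short sketch of inspecting that state with stock kubectl (both are standard kubectl config subcommands):

    # Show every context in the active kubeconfig; cilium-931932 is absent.
    kubectl config get-contexts

    # Print only the context kubectl would use by default
    # (cert-expiration-599351 in this run).
    kubectl config current-context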

x
+
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-591165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-591165
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
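
The skip gate at start_stop_delete_test.go:101 is driver-based: the group only runs on the virtualbox driver, so every other configuration deletes the placeholder profile and takes this SKIP path. A hedged sketch of exercising the same scenario by hand, assuming a host with VirtualBox available (--driver and --disable-driver-mounts are existing minikube start flags; the delete command is copied from the log):

    # Start the profile the way the gate expects: virtualbox driver with
    # the hypervisor's host mounts disabled.
    out/minikube-linux-amd64 start -p disable-driver-mounts-591165 \
        --driver=virtualbox --disable-driver-mounts

    # Tear the profile down again, exactly as the test harness does.
    out/minikube-linux-amd64 delete -p disable-driver-mounts-591165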